Welcome to the newest episode of The Cloud Pod podcast – where the forecast is always cloudy! Ryan, Jonathan, and Matt are your hosts this week as we discuss all things cloud, including updates to Terraform, pricing updates in GCP SCC, AWS Blueprint for Ransomware Defense, DMS Serverless, and Snowball – as well as all the discussion of Microsoft quantum-safe computing and ethical AI you could possibly want!
A big thanks to this week’s sponsor:
Foghorn Consulting provides top-notch cloud and DevOps engineers to the world’s most innovative companies. Initiatives stalled because you’re having trouble hiring? Foghorn can be burning down your DevOps and cloud backlogs as soon as next week.
📰News this Week:📰
00:57 Terraform AWS provider updates to v5.0
- HashiCorp announced this week that the Terraform AWS provider has been updated to version 5.0.
- The updates include changes that HashiCorp says will help them “focus on improving the user experience.”
- Support and improvements for default tags were added; tags can now be set at the provider level, applying them across all resources.
- Thanks to new features in the Terraform Plugin SDK and the Terraform Plugin Framework, issues related to inconsistent final plans, identical tags, and perpetual diffs are now solved.
- More information on default tags can be found in the changelog.
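For anyone who hasn’t played with it, here’s a minimal sketch of what provider-level default tags look like (the tag keys, values, and bucket name are all placeholders):

```hcl
provider "aws" {
  region = "us-west-2"

  # Applied to every resource this provider creates
  default_tags {
    tags = {
      Environment = "production" # placeholder values
      Team        = "platform"
    }
  }
}

# Inherits the Environment and Team tags automatically –
# no per-resource tags block required.
resource "aws_s3_bucket" "example" {
  bucket = "tcp-example-bucket" # hypothetical name
}
```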
04:11📢 Jonathan – “It’s kind of cool – it’s a neat hack as well as a way of AWS providing a really useful feature without having to do any work on the cloud platform itself. Just implement the tool that does the deploying rather than having a service which could do it for you.”
AWS
05:28 **NEW** AWS DMS Serverless
- Recognizing that many organizations need to move huge amounts of data into the cloud, AWS launched its Database Migration Service (DMS) back in 2016.
- To make the migration even more seamless, AWS has now announced DMS Serverless.
- AWS DMS Serverless will automatically set up, scale, and manage migration resources – all to make your migrations easier and (hopefully) more cost effective.
- Supports a variety of databases and analytics services, including Amazon Aurora, RDS, S3, Redshift, and DynamoDB among others.
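As a rough sketch of how the serverless flavor changes the workflow: instead of sizing a replication instance up front, you set capacity bounds and let DMS scale within them. This assumes the create_replication_config/start_replication calls boto3 documents for DMS Serverless; all ARNs, numbers, and identifiers here are placeholders:

```python
import json
import boto3

dms = boto3.client("dms")

# Create a serverless replication config instead of a replication instance;
# DMS scales between the min and max DMS Capacity Units on its own.
config = dms.create_replication_config(
    ReplicationConfigIdentifier="orders-db-migration",  # placeholder
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",  # placeholder
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",  # placeholder
    ReplicationType="full-load-and-cdc",  # initial copy, then ongoing change capture
    ComputeConfig={"MinCapacityUnits": 2, "MaxCapacityUnits": 16},
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)

# Kick off the replication using the new config.
dms.start_replication(
    ReplicationConfigArn=config["ReplicationConfig"]["ReplicationConfigArn"],
    StartReplicationType="start-replication",
)
```

This is exactly the scenario Matt describes below: during the long tail of a migration, the change rate is tiny, so capacity can scale down instead of sitting provisioned for peak.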
06:36📢 Matt- “I was thinking about it at the end of the migration – we finally got it all replicated; now we’re gonna wait a month before we actually cut over. We need this very small change rate, vs. let’s go replicate everything at the very beginning. It just kind of keeps it in sync. So in theory, it goes up and down, and you’re not provisioning based on peak capacity.”
07:26 New Snowball Edge Storage Optimized Devices with MORE storage and bandwidth
- AWS Snow Family devices help you move and process data in a cost-effective way.
- These new, enhanced Snowball Edge Storage Optimized devices are designed for huge amounts of data: petabyte-scale migration projects.
- They include 210 terabytes of NVMe storage and the ability to transfer up to 1.5 gigabytes of data per second.
- To make these migrations even more efficient, AWS has added a Large Data Migration Program, which helps customers make sure their sites are prepared for the rapid data transfers, as well as setting up a proof-of-concept migration.
- The idea is to allow customers to set up and deploy migrations to and from Amazon easily.
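Some quick napkin math on those numbers (assuming, generously, that you could sustain the peak 1.5 GB/s the entire time):

```python
# Napkin math: how long to fill one device at the advertised peak rate?
capacity_tb = 210      # NVMe storage per device
throughput_gb_s = 1.5  # advertised max transfer rate

seconds = (capacity_tb * 1000) / throughput_gb_s  # TB -> GB, then divide by GB/s
print(f"~{seconds / 3600:.0f} hours to fill at peak rate")  # ~39 hours
```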
07:52📢Matt – “I’m just wondering when Snowball Edge devices are gonna catch up to the Snowmobile – you know, like the trucks. 100 PB – we gotta be getting close.”
08:03 📢Ryan -”Not until you can drive it. I don’t care how much storage it holds. But I want to be able to drive it around – like anything I order from Amazon.”
09:25 Amazon Security Lake – Now Generally Available
- AWS recently announced the general availability of Security Lake.
- Security Lake will automatically centralize data from AWS environments, SaaS providers, on-premises environments, and cloud sources into a purpose-built data lake – all stored in your account.
- AWS says Security Lake will make it easier to analyze security data and give users a more comprehensive view of their security across an entire organization.
- Security Lake will also help organizations meet security requirements.
10:25 📢Ryan – “These are great things; a lot of the time this data is being collected anyway, and it’s being stored across many different devices and S3 buckets and it’s all over the place; at least this will put it all in one place where hopefully it’s a little more usable. But also, the main benefit is that it’s going to be easy to visualize the cost of this. Because a lot of security logging isn’t really utilized, but it’s stored – sometimes for a very long period of time – without actually providing any value. Sometimes you can’t even search it. You can’t even hydrate it into something until you have to for a security response. So I do like this tool – as much as I want to make fun of it.”
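Because Security Lake normalizes everything into OCSF-formatted Parquet in S3, the “hydrating” Ryan mentions can be as simple as an Athena query. A hedged sketch – the database, table, and partition names below follow Security Lake’s documented naming pattern but vary per account and region, so treat them all as placeholders:

```python
import boto3

athena = boto3.client("athena")

# Query VPC Flow Logs that Security Lake has normalized into OCSF.
# Database/table/partition names are illustrative; check your Glue
# catalog for the real ones.
query = """
    SELECT time, src_endpoint.ip AS source_ip, dst_endpoint.ip AS dest_ip
    FROM amazon_security_lake_table_us_east_1_vpc_flow_1_0
    WHERE eventday = '20230601'
    LIMIT 10
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "amazon_security_lake_glue_db_us_east_1"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},  # placeholder
)
```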
11:54 Announcing AWS Blueprint for Ransomware Defense
- AWS announced the AWS Blueprint for Ransomware Defense, which is available to both public-sector and enterprise organizations and can be customized to meet specific requirements.
- The Blueprint is a comprehensive framework that helps orgs protect themselves against ransomware attacks – an ongoing and serious issue if you read the news, well, EVER.
- It includes best practices for security, compliance, and disaster recovery.
- Based on AWS Security services (obviously…)
- If you really need help falling asleep, there’s *29* pages of prescriptive guidance and lists of CIS controls. Because who doesn’t love lists of CIS controls? We sure do!
Does it interest anyone else that AWS is putting out all these announcements ahead of re:Inforce coming up in June? It will be interesting to see what they release…
13:57📢Jonathan- “It’s nice to have a robust plan that your vendor also uses, because, as more and more high-profile ransomware cases hit the news, vendor management questions are going to start including ‘do you have a plan to deal with ransomware – and what is it?’ And the easy ‘well, this is what we use, and Amazon uses the same thing’ is probably a huge time saver.”
GCP
15:27 Security Command Center (SCC) Premium Pricing Gets a 25% Reduction
- Google Cloud has introduced a 25% reduction in Security Command Center (SCC) Premium pricing for project-level activation.
- SCC is a comprehensive risk management and security platform offered by GCP.
- The Premium tier offers more advanced features, such as security dashboards, anomaly detection, and integration with third-party tools.
- This cost reduction applies to customers using SCC to secure Compute Engine, GKE Autopilot, App Engine, and Cloud SQL.
- The hope is that by making SCC premium more affordable (or affordable AT ALL) that more organizations will be able to use it to guard against threats.
15:56 📢 Ryan- “So I’m probably biased because of my personal experience with SCC, but it’s priced VERY very high, and it is very hard to roll it out at scale with its pricing model. So this, I feel, is a necessary move to make it competitive.”
Azure
18:31 Building a Quantum-Safe Future
- Quantum computers are still in the early stages of development, but they DO have the potential to break current encryption standards. (Jonathan is going to keep adding this to his prediction list until it comes true.)
- Microsoft is now working on quantum-safe cryptography designed to resist attacks from future quantum computers.
- These kinds of updates and developments will be essential in protecting sensitive data.
- Microsoft is working with both private and public partners to develop new standards and get them deployed.
Quick reminder – Microsoft is also building the quantum computers that are going to crack the current encryption, so we have to make sure we have encryption that can beat the encryption-beating machines. A “finger in both pies” situation. So that’s fun.
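To make “quantum-safe” a little more concrete, here’s a minimal key-encapsulation sketch using the open-source liboqs-python bindings from the Open Quantum Safe project (not a Microsoft product – the algorithm name and API are assumptions based on that library’s documentation):

```python
import oqs  # pip install liboqs-python (requires the liboqs C library)

ALG = "Kyber768"  # NIST-selected KEM, being standardized as ML-KEM

# The receiver generates a keypair; only the public key goes over the wire.
with oqs.KeyEncapsulation(ALG) as receiver:
    public_key = receiver.generate_keypair()

    # The sender encapsulates: produces a ciphertext plus a shared secret.
    with oqs.KeyEncapsulation(ALG) as sender:
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # The receiver decapsulates the ciphertext to recover the same secret.
    secret_receiver = receiver.decap_secret(ciphertext)

assert secret_sender == secret_receiver  # both sides now share a symmetric key
```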
20:20 📢 Matt- “But can you build a safety standard against something that doesn’t exist yet?”
20:25 📢 Ryan – “That’s the easiest safety standard to build, right?!”
20:40 📢 Jonathan – “It is a bit of a self-fulfilling prophecy about the whole thing though.”
21:58 Reflections on AI and the Future of Human Flourishing
- No, this isn’t the newest concept ride over at EPCOT. It’s the latest blog from Microsoft! A blog – about AI – from Microsoft? How new and interesting!
- Did you know that AI has the potential to be a powerful tool for good? We had no idea!
- But of course, “it is important to use AI responsibly,” and it should be developed in a way that benefits all of humanity – not just a select few.
- The blog post talks about the need to be prepared for the negative consequences of AI, such as job displacement, and how Microsoft needs to have “a clear understanding” of the ethical implications of AI.
- It also discusses the need to include diverse voices and research when it comes to developing AI in the future.
25:33 📢 Ryan – “Localization for AI is gonna be a thing, right? We’re just not there yet. It is a very difficult challenge, labeling and machine learning – that’s been around for a while and I still don’t know a really good solution, other than what people do – which is Mechanical Turk it out; pay a lot of people a little money to go and just do a subset. I imagine it’ll be the same with localization, and we’ll see how good that turns out.”
29:55 📢 Jonathan- “I will say something for OpenAI though: I like that they’re not publicly owned. There aren’t shareholders to please. They’re not being pushed by investors to rush things out or monetize in a particular way.”
34:05 Microsoft Announced the Azure AI Content Safety Public Preview
- Now in public preview, this new suite of AI tools helps companies protect their users from harmful content.
- Content Safety uses advanced AI to detect (and remove) offensive or inappropriate content in text and in images.
- The tools are currently available to all Azure users, and Microsoft is partnering to make them more widely available.
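Usage looks roughly like this – a hedged sketch based on the azure-ai-contentsafety Python SDK that shipped alongside the preview; the endpoint and key are placeholders, and response field names may differ between preview versions:

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a Content Safety resource.
client = ContentSafetyClient(
    "https://<your-resource>.cognitiveservices.azure.com/",
    AzureKeyCredential("<your-key>"),
)

# Ask the service to score a piece of user-generated text.
response = client.analyze_text(AnalyzeTextOptions(text="user-generated text here"))

# Each category (hate, sexual, violence, self-harm) comes back with a severity.
for item in response.categories_analysis:
    print(item.category, item.severity)
```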
34:36 📢 Ryan – “If you were wondering why Microsoft felt the need to publish a blog post with a deep thought experiment about being responsible with AI and Microsoft’s responsibility, now you know! They also now offer a service in public preview where you can give them money!”
35:08 Matt’s article from 2017 Non-Profit Hackathon “We Saw. We Hacked. We Conquered”
38:17 Azure Load Balancer per VM limit has been removed
- All customers using the standard load balancer now have UNLIMITED power. Wait, no. Unlimited load balancers. Yeah, that’s it. You can now have as many load balancers per VM as you’d like – a pretty big increase from the old limit of TWO (one public and one internal).
38:56 📢 Matt- “I just want to know what caused it. Like, what was the technical limitation that was in place that caused this limit to have to occur?”
Either way – their announcement is now officially shorter than our notes about said announcement.
Oracle
Continuing our Cloud Journey Series Talks
40:53 Cloud-Native Multicloud
- I know it feels like we already talk about this all the time.
- 214 episodes in, the question is: do we LIKE multicloud?
- Should it be your first choice? Probably not.
- Are there times when multicloud makes sense? Maybe – a really compelling service, or inheriting someone else’s cloud – but otherwise there doesn’t really seem to be a good enough reason, especially given the potential cost.
- Is it even POSSIBLE to do multicloud correctly? It’s hard enough to do ONE cloud right.
- The best course of action is probably to choose the one cloud that best fits your organization and then just deal with its limitations, rather than trying to manage a multicloud environment.
- Our opinion: there’s almost no reason anyone should voluntarily choose a multicloud situation.
- There’s an argument to be made that you actually lose time, money, and efficiency by giving up some of the advantages of your primary environment – you lose the benefits of using the cloud.
- One of the important things to remember is that the cloud – by default – is NOT cheaper than on prem.
- Using the included services and the correct tools can definitely increase the value for money.
- One case where multicloud might make sense: data center location (read: latency, downtime, etc.) – but even then it really only applies to the “last mile” of talking to devices or customers.
- Data sovereignty is going to be a key issue for regions and, in turn, for multicloud.
- One case where multicloud is not ideal: vendor lock-in. Do you want to be locked into a relationship with one vendor – or three?
- Does cost really influence moving from one cloud provider to another? Or is it just too complicated? If you’ve seen an organization actually move clouds entirely over cost, please let us know!
- The overall price tag is definitely prohibitive for multicloud, once you factor in compliance, the number of people required, etc.