Welcome to episode 277 of The Cloud Pod, where the forecast is always cloudy! Justin, Ryan, and Matthew are your hosts this week for a news-packed show. This week we dive into the latest in cloud computing with announcements from Google’s new AI search tools, Meta’s open-sourced AI models, and Microsoft Copilot’s expanded capabilities. We’ve also got Oracle releases, some non-liquid Java (but also the liquid kind, too), and Class E IP addresses on the agenda. Plus, be sure to stay tuned for the aftershow!
Titles we almost went with this week:
🦙Which cloud provider does not have Llama 3.2?
🆘VMware says we will happily help you support your old Microsoft OSes for $$$$
🌌Class E is the best kind of IP Space
🤖Microsoft says trust AI, and so does Skynet
🦙3.2 Llamas walked into an AI bar…
🪪Google gets cranky about MS Licensing, join the club
✍️Write Your Prompts, Optimize them with Vertex AI Prompt Optimizer, rinse, repeat into a vortex of optimization
🖱️Oracle releases Java 23, Cloud Pod Uses Amazon Corretto 23 instead
🏃Oracle releases Java 23, Cloud Pod still says run!
A big thanks to this week’s sponsor: Archera
There are a lot of cloud cost management tools out there. But only Archera provides cloud commitment insurance. It sounds fancy, but it’s really simple: Archera gives you the cost savings of a 1- or 3-year AWS Savings Plan with a commitment as short as 30 days. If you don’t use all the cloud resources you’ve committed to, they will literally put money back in your bank account to cover the difference. Other cost management tools may say they offer “commitment insurance,” but remember to ask: will you actually give me my money back? Archera will. Click the link below to check them out:
https://shortclick.link/uthdi1
AI Is Going Great – Or How ML Makes All Its Money
01:06 OpenAI CTO Mira Murati, 2 other execs announce they’re leaving
- Listener Note: paywall article
- OpenAI Chief Technology Officer Mira Murati is leaving, and within hours, two more OpenAI executives joined the list of high-profile departures.
- Mira Murati spent 6.5 years at the company and was named interim CEO when the board briefly ousted co-founder Sam Altman.
- “It’s hard to overstate how much Mira has meant to OpenAI, our mission, and to us all personally,” Altman wrote. “I feel tremendous gratitude towards her for what she has helped us build and accomplish, but most of all, I feel personal gratitude towards her for her support and love during all the hard times. I am excited for what she’ll do next.”
- Mira oversaw the development of ChatGPT and the image generator DALL-E. She was also a pretty public face for the company, appearing in its videos and giving interviews to journalists.
- The other two departures were Barret Zoph, the company’s Vice President of Research, and Chief Research Officer Bob McGrew.
02:26 📢 Ryan – “Her reason for leaving is, you know, to take some time and space to explore and, you know, be more creative. I’m like, yeah, okay, that’s standard copy. Yeah. Leaving for health reasons? You got fired.”
- Copywriter Note: this is 100% copywriter-speak for “you either got fired, or will be soon and decided to step down.”
03:38 Llama 3.2: Revolutionizing edge AI and vision with open, customizable models
- Meta is releasing Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B) and lightweight, text-only models (1B and 3B) that fit on edge and mobile devices, in both pre-trained and instruction-tuned versions.
- The 1B and 3B models support a context length of 128K tokens and are state-of-the-art in their class for on-device use cases like summarization, instruction following, and rewriting tasks running locally at the edge.
- The models are enabled on Qualcomm and MediaTek hardware and optimized for Arm processors.
- The Llama 3.2 11B and 90B vision models are drop-in replacements for their text-model equivalents, while exceeding closed models such as Claude 3 Haiku on image-understanding tasks.
- Unlike other multi-modal models, both pre-trained and aligned models are available to be fine-tuned for custom applications using torchtune and deployed locally using torchchat.
- In addition, they are launching Llama Stack distributions, which greatly simplify the way developers work with Llama models across environments (single-node, on-prem, cloud, and on-device), enabling turnkey RAG and tooling-enabled applications with integrated safety.
- Models are available on llama.com, Hugging Face, and various partner platforms; a minimal local example is sketched below.
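If you want to kick the tires locally, here’s a minimal sketch using the Hugging Face transformers library (assuming you’ve accepted Meta’s license for the gated meta-llama repo and are on a recent transformers release that accepts chat-style input):

```python
import torch
from transformers import pipeline

# Load the lightweight 1B instruct model; device_map="auto" uses a GPU if present.
pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style input; the pipeline applies the model's chat template for us.
messages = [{"role": "user", "content": "In one line, what's new in Llama 3.2?"}]
out = pipe(messages, max_new_tokens=60)

# The result is the conversation with the assistant's reply appended.
print(out[0]["generated_text"][-1]["content"])
```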
04:58 📢 Ryan – “I’m excited about the Stack distributions just because it makes using these things a lot easier. I love the idea of having turnkey RAG and being able to sort of create that more dynamically without going too deep into AI and knowing how the sausage is made. And the fact that they’re making models small enough to fit on edge and mobile devices is just great.”
07:06 Introducing Meta Llama 3.2 on Databricks: faster language models and
- Databricks now supports Meta Llama 3.2.
AWS
07:35 Run your compute-intensive and general purpose workloads sustainably with the new Amazon EC2 C8g, M8g instances
- Last week, we talked about the new C8g instances, but alongside those, Amazon has launched the Graviton4-powered M8g instances with even more CPU and memory.
- M8g instances offer up to 192 vCPUs, 768 GB of memory, 50 Gbps of network bandwidth, and 40 Gbps of EBS bandwidth.
- AWS Graviton4 processors offer enhanced security with always-on memory encryption, dedicated caches for every vCPU, and support for pointer authentication.
08:58 📢 Ryan – “I don’t know why you guys are more concerned about the headline, because I was like, what is a sustainable workload when you’re talking about 192 vCPUs and all the gobs of memory? And you go through the entire blog post, they don’t mention it. They don’t mention anything about the power or the CO2 or anything. And so you’re just left to assume that because it’s Graviton, it’s more energy efficient. But I am claiming clickbait. I call bullshit.”
10:19 Introducing Llama 3.2 models from Meta in Amazon Bedrock: A new generation of multimodal vision and lightweight models
- AWS gets Llama 3.2 90B and 11B vision and 3B and 1B text-only models in Amazon Bedrock; a hedged Converse API sketch follows.
- Woohoo.
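For the curious, here’s a hedged sketch of calling one of the vision models through Bedrock’s Converse API with boto3; the model ID below is our guess at the inference-profile naming scheme, so check the Bedrock console for the exact identifier and region availability:

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

# Vision models accept mixed text + image content blocks.
with open("diagram.png", "rb") as f:
    image_bytes = f.read()

response = bedrock.converse(
    modelId="us.meta.llama3-2-11b-instruct-v1:0",  # assumption: verify in your console
    messages=[{
        "role": "user",
        "content": [
            {"text": "Describe this architecture diagram."},
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
        ],
    }],
)

print(response["output"]["message"]["content"][0]["text"])
```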
28:31 Migrating from AWS App Mesh to Amazon ECS Service Connect
- AWS has decided to deprecate AWS App Mesh effective September 30th, 2026.
- Until this date, AWS App Mesh customers will be able to use the service as normal, including creating new resources and onboarding new accounts via the AWS CLI and AWS CloudFormation.
- However, new customers will no longer be able to onboard to AWS App Mesh starting on September 24th, 2024.
- This blog post walks you through the differences between the two solutions and how to migrate; a minimal Service Connect sketch follows. This is the way all deprecations should be done on AWS.
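To give a flavor of the destination, here’s a minimal sketch of turning on Service Connect for an existing ECS service with boto3; the cluster, namespace, and port names are hypothetical, and the blog post covers the full migration (task definition port names, Cloud Map namespaces, and so on):

```python
import boto3

ecs = boto3.client("ecs")

# Enable Service Connect on an existing service. The portName must match a
# named container port in the task definition; the alias is what clients dial.
ecs.update_service(
    cluster="prod-cluster",
    service="orders",
    serviceConnectConfiguration={
        "enabled": True,
        "namespace": "internal",  # an AWS Cloud Map namespace
        "services": [{
            "portName": "http",
            "discoveryName": "orders",
            "clientAliases": [{"port": 8080, "dnsName": "orders.internal"}],
        }],
    },
)
```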
11:09 📢 Justin – “Thank you, Amazon, for writing a thorough blog post detailing how to get this done versus just silently canceling a service in the community post. I appreciate it.”
14:34 Switch your file share access from Amazon FSx File Gateway to Amazon FSx for Windows File Server
- While the App Mesh deprecation is a bit of a big deal, this one feels more like a yawn to us.
- As of October 28th, 2024, new customers will no longer be able to deploy Amazon FSx File Gateway.
- FSx File Gateway is a type of AWS Storage Gateway with local caching, designed to be deployed on premises.
- FSx File Gateway optimizes on-premises access to fully managed file shares in Amazon FSx for Windows File Server.
- With the drop in bandwidth costs and increasing availability, many clients can access FSx for Windows File Server in the cloud from their on-premises locations without the need for a gateway or local cache.
- Those who still need a local cache will find that Amazon FSx for NetApp ONTAP using FlexCache or Global File Cache can serve their needs.
15:49 📢 Matthew – “It’s more interesting that this is the first one they decided to kill off, not the other services that have been around. Because years ago when they first had all the storage gateways, there were like the three types they had. And obviously they had the fourth, but they didn’t kill off any of the S3 ones that were related. If you’re talking about things like network latency and everything else, that’s where blob storage is meant to kind of handle that, with Samba shares, CIFS shares.”
GCP
17:54 Google files EU antitrust complaint against Microsoft over software licensing
- Google has filed an antitrust complaint against Microsoft Corp. with the European Commission.
- The move has to do with Windows Server. Per Google, a set of licensing terms that Microsoft applied to the OS in 2019 harmed competition and raised costs for its customers.
- Under the revised usage terms, customers must pay additional fees if they wish to move their Windows Server licenses from Azure to rival platforms such as Google Cloud.
- Google claims that this can result in a 400% price increase to run Windows on rival clouds.
- Google wasn’t done, complaining that companies running Windows Server on third-party cloud platforms get limited access to security patches compared to Azure users; the search giant also argues there are other “interoperability barriers.”
- This complaint comes two years after CISPE filed a similar complaint, but they withdrew it after reaching an agreement with Microsoft.
18:52 📢 Ryan – “The Microsoft press releases for this have been worded very differently in the sense of like, it’s features built into the Azure workloads. And so it’s like, while you say that, they’re not granting the ability to Windows servers to get security patches on other clouds. The reality is, it’s only because they have the workloads running in Azure that they can offer the enhanced security patches, or at least I presume that. I guess I don’t know that. But yeah, and then the Windows licensing, it’s a service. Your licensing fees are built into using this service. yeah, competitive advantage.”
20:11 BigQuery vector search now GA, setting the stage for a new class of
- BigQuery Vector Search is now generally available, enabling vector similarity search on BigQuery data.
- This functionality, also commonly referred to as approximate nearest-neighbor search, is the key to empowering numerous new data and AI use cases such as semantic search, similarity detection, and retrieval-augmented generation (RAG) with large language models.
- Initially announced in February, BigQuery vector search integrates generation, management and search of embeddings within the data platform to provide a serverless and integrated vector analytics solution for use cases such as anomaly detection, multi-modal search, product recommendations, drug discovery and more.
- In addition, IVF (Inverted File Index) for BigQuery vector search is also GA. This index uses a k-means algorithm to cluster the vector data and combines it with an inverted row locator in a two-piece index to efficiently search similar embedding representations of your data (a Python query sketch follows the list). IVF includes several new enhancements:
- Improved scalability
- Managed index with guaranteed correctness
- Stored Columns
- Pre-filters
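As a rough illustration of the flow, here’s a sketch of creating an IVF index and querying it from Python; the dataset, table, and column names are made up:

```python
from google.cloud import bigquery

client = bigquery.Client()

# One-time setup: build an IVF index over a column of embeddings.
client.query("""
CREATE VECTOR INDEX IF NOT EXISTS product_embedding_idx
ON demo.products(embedding)
OPTIONS (index_type = 'IVF', distance_type = 'COSINE')
""").result()

# Approximate nearest-neighbor search against a query embedding.
rows = client.query("""
SELECT base.product_id AS product_id, distance
FROM VECTOR_SEARCH(
  TABLE demo.products, 'embedding',
  (SELECT embedding FROM demo.query_embeddings LIMIT 1),
  top_k => 5, distance_type => 'COSINE')
""").result()

for row in rows:
    print(row.product_id, row.distance)
```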
22:15 📢 Justin – “…so my experience so far with costing of AI things is that it’s not as expensive as people fear it is. If you’re building a foundational model, 100%, it’s expensive. You need lots of Nvidia GPUs, that kind of stuff. But if you’re using inference nodes and you’re using an LLM to respond or using RAG to augment, it isn’t as expensive as you might think it is to do those things, at least at some scale. Not as much as you might fear.”
24:47 Google Cloud database news roundup, September 2024 edition
- Google summarizes a busy month of announcements for September 2024.
- Oracle Database GA in Google Cloud (see last week’s show)
- New Spanner Editions are now generally available across Standard, Enterprise and Enterprise Plus. (also last week)
- Cloud SQL has three new features that improve Cloud SQL Enterprise Plus capabilities for PostgreSQL and MySQL:
- Edition upgrades for in-place upgrades
- MySQL minor version upgrades
- Zonal (i.e., standalone) instances.
- AlloyDB now supports PostgreSQL 16 in preview.
- Node-level metrics on Memorystore for Redis Cluster
- Memorystore for Valkey support
- And KNN vector search for Firestore is now generally available.
- A busy month covered here at The Cloud Pod (well, except Firestore, which Justin refuses to discuss).
26:18 Announcing Public Preview of Vertex AI Prompt Optimizer
- Prompt design and engineering stands out as one of the most approachable methods to drive meaningful output from LLMs.
- However, prompting large language models can feel like navigating a complex maze. You must experiment with various combinations of instructions and examples to achieve the desired output.
- Taking a prompt and moving it from one LLM to another is challenging because different language models behave differently. Simply reusing a prompt is ineffective, so users need an intelligent prompt optimizer to generate useful prompts.
- To help solve this problem, Google is announcing Vertex AI Prompt Optimizer in public preview.
- Prompt Optimizer makes it easy to optimize prompts, handles versatile tasks with expanded support for multi-modal tasks, offers comprehensive evaluations, and is flexible and customizable.
- It’s built for data-driven optimization, and built for Gemini.
27:48 📢 Ryan – “I feel like I’m ahead of my time because I have not retrained my brain. But what I have learned to do is just ask AI how I should ask it. So I feel like this is basically just service-ifying my normal use case, which is like, hey, I want to do a thing. How do I ask you to do a thing? And then it asks itself much better than I would have.”
29:22 From millions to billions: Announcing vector search in Memorystore for Valkey and Redis Cluster
- Google is announcing vector search on both Memorystore for Valkey and Memorystore for Redis Cluster.
- This combines ultra-low-latency in-memory vector search with zero-downtime scalability and high-performance vector search across millions or billions of vectors.
- Currently in preview, vector support for these Memorystore offerings means you can now scale out your cluster to 250 shards, storing billions of vectors in a single instance.
- Vector search with Redis can produce single-digit-millisecond latency over a billion vectors with greater than 99% recall; a redis-py sketch follows.
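Since the blog describes a RediSearch-style FT.* command surface, a standard redis-py vector query should work against it. A hedged sketch (the index name, field names, and dimensions are made up, and we haven’t verified every option against Memorystore):

```python
import numpy as np
import redis
from redis.commands.search.field import VectorField
from redis.commands.search.query import Query

r = redis.Redis(host="my-memorystore-endpoint", port=6379)

# One-time setup: an HNSW vector index over hash documents.
r.ft("doc_idx").create_index([
    VectorField("vec", "HNSW", {
        "TYPE": "FLOAT32", "DIM": 128, "DISTANCE_METRIC": "COSINE",
    })
])

# Store a document with a 128-dim float32 vector.
r.hset("doc:1", mapping={"vec": np.random.rand(128).astype(np.float32).tobytes()})

# KNN query: the ten nearest neighbors to a query vector.
qv = np.random.rand(128).astype(np.float32).tobytes()
q = Query("*=>[KNN 10 @vec $qv]").sort_by("__vec_score").dialect(2)
results = r.ft("doc_idx").search(q, query_params={"qv": qv})
print(results.total)
```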
29:57 📢 Justin – “I don’t know if I would say that Redis or Valkey is, you know, zero downtime, but sure, okay.”
31:53 Leveraging Class E IPv4 Address space to mitigate IPv4 exhaustion issues
- As most technologists know, we are rapidly running out of IPv4 space, and the number of applications and services hosted on GKE continues to grow, consuming even more private IPv4 address space.
- For many large organizations, the RFC 1918 address space is becoming increasingly scarce, leading to IP Address Exhaustion challenges that impact their applications at scale.
- IPv6 solves this exact issue by providing more addresses, but not all enterprises or applications are ready for IPv6 yet.
- Bringing Class E IPv4 address space (240.0.0.0/4) into play can address these challenges as you continue to grow.
- Class E addresses are reserved for future use, as noted in RFC 5735 and RFC 1112; however, that doesn’t mean you can’t use them today in certain circumstances.
- This blog post goes into the details of how to do this, which I found pretty interesting.
- The following are some common objections or misconceptions about using Class E addresses:
- Class E addresses do not work with other Google services. This is not true. Google Cloud VPC includes Class E addresses as part of its valid address ranges for IPv4. Further, many Google-managed services can be accessed using private connectivity methods with Class E addresses.
- Using Class E addresses limits communicating with services outside Google (internet / Interconnect to on-prem / other clouds). Misleading. Given that Class E addresses are non-routable and not advertised over the internet or outside of Google Cloud, you can use NAT or IP masquerading to translate Class E addresses to public or private IPv4 addresses to reach destinations outside of Google Cloud.
- With the notable exception of Microsoft Windows, many operating systems now support Class E addresses.
- Many on-prem vendors (Cisco, Juniper, Arista) support routing Class E addresses for private DC use.
- Class E addresses have performance/scale limitations. This is not true. There is no performance difference for Class E addresses from other address ranges used in Google Cloud. Even with NAT/IP Masquerade, agents can scale to support a large number of connections without impacting performance.
- So while Class E addresses are reserved for future use, not routable over the Internet, and should not be advertised over the public Internet, you can use them for private use within Google Cloud VPCs, for both Compute Engine instances and Kubernetes pods/services in GKE.
- There are several benefits of leveraging the Class E address space:
- It’s very large: while RFC 1918 has 17.9 million addresses, Class E has 268.4 million addresses (quick math after this list).
- Scalability and growth
- Efficient resource utilization
- Future-proofing
- There are sharp edges, though. Not all OSes support Class E addressing, and networking equipment and software such as routers and firewalls need to support Class E addresses. Transitioning from RFC 1918 to Class E requires careful planning and execution.
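The address-count claims above are easy to verify with the standard library:

```python
import ipaddress

# Class E is everything from 240.0.0.0 up (a /4, i.e. 2^28 addresses).
class_e = ipaddress.ip_network("240.0.0.0/4")
rfc1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

print(f"Class E:  {class_e.num_addresses:,}")                  # 268,435,456
print(f"RFC 1918: {sum(n.num_addresses for n in rfc1918):,}")  # 17,891,328
```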
35:55 📢 Justin – “I did do a quick Google search: does Windows support Class E addresses? And no, it does not. Windows blocks Class E addresses and doesn’t allow them to be assigned to a NIC through DHCP. Apparently, though, you can set one up in Azure as your virtual network, but they say it will not work for your Windows boxes, and it may have compatibility issues with your Linux boxes. Which, yeah, cool, cool, cool. But you know.”
37:47 Meta’s Llama 3.2 is now available on Google Cloud
- Meta Llama 3.2 is available on Google Cloud in the Vertex AI Model Garden.
- By using Llama 3.2 on Vertex AI, you can:
- Experiment with confidence: Explore Llama 3.2 capabilities through simple API calls and our comprehensive generative AI evaluation service within Vertex AI’s intuitive environment, without worrying about complex deployment processes.
- Tailor Llama 3.2 to your exact needs: Fine-tune the model using your own data to build bespoke solutions tailored to your unique needs.
- Ground your AI in truth: Make sure your AI outputs are reliable, relevant, and trustworthy with Vertex AI’s multiple options for grounding and RAG. For example, you can connect your models to enterprise systems, use Vertex AI Search for enterprise information retrieval, leverage Llama for generation, and more.
- Craft intelligent agents: Create and orchestrate agents powered by Llama 3.2, using Vertex AI’s comprehensive set of tools, including LangChain on Vertex AI. Integrate Llama 3.2 into your AI experiences with Genkit’s Vertex AI plugin.
- Deploy without overheads: Simplify deployment and scaling Llama 3.2 applications with flexible auto-scaling, pay-as-you-go pricing, and world-class infrastructure designed for AI.
- Operate within your enterprise guardrails: Deploy with confidence with not only support for Meta’s Llama Guard for the models, but also Google Cloud’s built-in security, privacy, and compliance measures. Moreover, enterprise controls, such as Vertex AI Model Garden’s new organization policy, provide the right access controls to make sure only approved models are accessed by users.
38:36 Migrate your SQL Server databases using Database Migration Service, now GA
- DMS for SQL Server databases is now generally available.
- Database migrations are often challenging and require scarce expertise.
- Database Migration Service has a unique approach to SQL Server database migrations:
- Minimal Downtime and System Overhead
- Serverless Simplicity
- Security at the forefront
- No additional charge
39:13 📢 Ryan – “I like the service. I really just wish it would work server-to-server in the cloud, because then I could use it… It just doesn’t, because they restricted it so that you have to define your endpoint as a Cloud SQL box.”
Azure
40:20 Developer insights: Building resilient end-to-end security
- This is the first in a new series on the Azure Blog covering Microsoft’s end-to-end approach to cybersecurity.
- The purpose of this series is to highlight how Microsoft Security is transforming security platforms with practical, end-to-end security solutions for developers.
- It’s a lot of fluffy overview in this first entry, but we’ll keep an eye on the series as it evolves to see what else Microsoft reveals. You’re welcome.
- If you’re not familiar with a platform approach to security, though, you should check it out in our show notes.
41:22 📢 Matthew – “I think it’s a good start to try to get people to think about security day one. So many people think about security when they’re ready to go to production: wait, this thing has to be SOC compliant or GDPR, whatever it is. So I feel like it’s a good way to try to get developers to think security at the beginning versus security at the end. And if I have to say shift left, I might vomit a little.”
42:39 Run VCF private clouds in Azure VMware Solution with support for portable VCF subscriptions
- For those of you who are paying for VMware Cloud Foundation bundles from Broadcom, you can now port those subscriptions to Microsoft’s Azure VMware Solution (AVS) quickly and easily, using familiar VMware tools and skills.
- If you don’t have a VCF subscription, but want to take advantage of VCF and AVS you can buy your solution from Microsoft directly.
- This may be a benefit for you, as it includes the fully managed and maintained cloud and VMware infrastructure.
- The VMware Cloud Foundation stack includes vSphere, vSAN, NSX, and HCX, as well as VCF Operations and VCF Automation (formerly the Aria Suite).
- You also get extended security updates for Windows Server 2012 and SQL Server 2012 and 2014.
43:53 Microsoft Trustworthy AI: Unlocking human potential starts with trust
- Microsoft is focused on helping customers use and build AI that is trustworthy, meaning that it is secure, safe and private.
- Security is the top priority, and their expanded Secure Future Initiative underscores the company’s commitment and responsibility to make customers more secure.
- To enhance security with AI, they are launching Evaluations in Azure AI Studio to support proactive risk assessments.
- Microsoft 365 Copilot will provide transparency into web queries to help admins and users better understand how web search enhances the Copilot response.
- In terms of safety, they have several new features and capabilities to ensure AI is safe and to mitigate risks.
- A correction capability in the Azure AI Content Safety groundedness detection feature helps fix hallucination issues in real time, before users see them (a basic Content Safety call is sketched after this list).
- Embedded content safety allows customers to embed Azure AI content safety on devices. This is important for on-device scenarios where connectivity could be unavailable or intermittent.
- New evaluations in Azure AI studio to help customers assess the quality and relevancy of outputs and how often their AI application outputs protected material
- Protected material detection for code is now in preview in Azure AI content safety to help detect pre-existing content and code. This feature helps developers explore public source code in GitHub repos, fostering collaboration and transparency while enabling more informed coding decisions.
- And finally, in privacy, they are announcing:
- Confidential inference in preview in the Azure OpenAI Service Whisper model, so customers can develop generative AI applications that support verifiable end-to-end privacy.
- General availability of Confidential VMs with NVIDIA H100 Tensor Core GPUs.
- Azure OpenAI Data Zones for the EU and US are coming soon; they build on the existing data residency provided by the Azure OpenAI Service, making it easier to manage the data processing and storage of generative AI applications. This new functionality offers customers the flexibility of scaling generative AI applications across all Azure regions within a geography, while giving them control of data processing and storage within the EU or US.
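For context on what the content-safety side looks like from code, here’s a minimal sketch using the existing Azure AI Content Safety SDK to screen a piece of model output (the new correction/groundedness and protected-material features are in preview and may use different calls; the endpoint and key are placeholders):

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Screen a piece of model output before showing it to users.
result = client.analyze_text(AnalyzeTextOptions(text="Some model output to screen"))
for category in result.categories_analysis:
    print(category.category, category.severity)
```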
45:55 📢 Ryan – “That’s an interesting wrinkle that I hadn’t thought of before. You know, the computation of these AI models and having that all be within specific regions for, I guess, GDPR reasons.”
Oracle
47:51 Oracle Releases Java 23
- Oracle is launching Java 23.
- We still don’t know how we got from 8 to 23, but here we are.
- Java 23 is supported by the recent GA of Java Management Service 9.0, an OCI native service that provides a unified console to help organizations manage Java runtimes and applications on-premises or in the cloud.
- JMS 9 includes usability improvements, and JDK 23 provides more options to fine-tune and improve peak performance with the addition of the Graal compiler, a dynamic just-in-time compiler written in Java that transforms bytecode into optimized machine code.
48:40 📢 Justin – “…if you’re paying Oracle’s ridiculous Java fees and not using Corretto or any of the other numerous Java ports that have happened, you can get this from Oracle for Java 23.”
51:06 Oracle’s stock pops on strong earnings beat, driven by cloud growth and
- Oracle’s recent quarter was good, with earnings per share of $1.39 vs. the target of $1.32.
- Revenue for the quarter rose 8% from a year before to $13.31 billion, better than Wall Street estimates. Net income rose to $2.93 billion, up from $2.42 billion in the same period a year earlier.
- Cloud services and license support revenue rose 10% from a year earlier to $10.52 billion, while cloud infrastructure revenue grew 45% to $2.2 billion.
- Catz said that demand is outstripping supply, and she is OK with that.
51:40 📢 Justin – “I don’t really understand if cloud service and licensing is like Oracle licensing and cloud OCI revenue shoved together. And then they also break out cloud infrastructure into its own number, but like 2.2 billion is not a lot of money for a cloud.”
52:48 Announcing General Availability of OCI Compute with AMD MI300X GPUs
- OCI is announcing the GA of bare metal instances with the AMD Instinct MI300X GPU.
- OCI Supercluster with AMD Instinct MI300X accelerators provides a high-throughput, ultra-low-latency RDMA cluster network architecture for up to 16,384 MI300X GPUs.
- A single instance runs $6.00 per hour and includes 8 AMD Instinct MI300X accelerators with 1.5 TB of HBM memory, Intel Sapphire Rapids CPUs, 2 TB of DDR5 system memory, and 8× 3.84 TB NVMe drives, with 100G front-end network support.
53:35 📢 Matthew – “I still say you’re doing the cloud wrong.”
Aftershow
54:48 System Initiative is the Future
- Adam Jacob has announced his new startup, System Initiative. Jacob is a well-known DevOps founder who was one of the engineers behind Chef.
- Revolutionary DevOps Technology: System Initiative is introduced as a game-changing DevOps automation tool. It offers a fresh approach that addresses long-standing industry issues, such as slow feedback loops and complex infrastructure challenges.
- Building What You Believe In: The founder emphasizes the importance of building products you are passionate about. This project is the result of five years of work, and feels like the culmination of a career in DevOps tools.
- The Problem with Infrastructure as Code: While functional, infrastructure as code is limited. It locks systems in static representations of dynamic environments, causing inefficiencies. The founder believes the industry is stuck and needs new solutions.
- Digital Twins & Simulation: A key innovation in System Initiative is using 1:1 digital twins of cloud infrastructure, decoupling real and hypothetical states. This solves the feedback loop problem by simulating infrastructure changes without deploying them.
- 200% Problem Solved: System Initiative simplifies automation by eliminating the need to master both the underlying domain and the tool itself (the classic “200% problem”). Its digital twins offer a 1:1 translation with no loss of fidelity.
- Complexity in DevOps: The founder reflects on working with major enterprises and the complexity inherent in all infrastructure. System Initiative embraces this complexity with a platform designed to be powerful, flexible, and expressive.
- Reactive Programming for Flexibility: System Initiative’s infrastructure is based on a reactive graph of functions, making it easier to create, modify, and automate complex environments dynamically.
- Multiplayer Collaboration: System Initiative enables real-time collaboration, allowing multiple users to work on the same infrastructure and see changes instantly. This drastically improves communication and productivity in DevOps teams.
- Open Source & Community Focus: The project is 100% open source, inviting contributions and fostering a collaborative community to build and extend the platform.
- Future of DevOps Automation: The System Initiative aims to replace Infrastructure as Code today and transform how teams work together in complex environments in the future. It’s presented as the next step in the evolution of DevOps.
Closing
And that is the week in the cloud! Visit our website, the home of The Cloud Pod, where you can join our newsletter, Slack team, send feedback, or ask questions at theCloudPod.net, or tweet at us with the hashtag #theCloudPod.