Software Development

Explore top LinkedIn content from expert professionals.

  • View profile for Geoff Hancock, CISO, CISSP, CISA, CEH, CRISC

    As a CISO (multiple times) and CEO, I help business and technology executives enhance their leadership, master cyber operations, and bridge cybersecurity with business strategy.

    8,569 followers

    A Quick Plan/Approach for CISOs to Address AI, Fast.

    As a CISO/CEO you have to stay on top of new ideas, risks, and opportunities to grow and protect the business. As we all keep hearing and seeing, LLM/AI usage is increasing every day. This past week my inbox has been full of one question: How do I actually protect my company's data when using AI tools?

    Over the last 9 years I have been working on, been involved with, and created LLM/AI cyber and business programs, and as a CISO I have been steadily integrating ideas about AI/cyber operations, data protection, and business. Here are five AI privacy practices that I have found really work and that I recommend to clients, partners, and peers. I group them into three clear areas: Mindset, Mechanics, and Maintenance.

    1. Mindset: Build AI Privacy Into the Culture

    Privacy isn't just a checklist, it's a behavior.

    Practice #1: Treat AI like a junior employee with no NDA. Before you drop anything into ChatGPT, Copilot, or any other AI tool, stop and ask: Would I tell this to a freelancer I just hired five minutes ago? That's about the level of control you have once your data is in a cloud-based AI system. This simple mental filter keeps teams from oversharing sensitive client or company info.

    Practice #2: Train people before they use the tool, not after. Too many companies slap a "responsible AI use" policy into the employee handbook and call it a day. That's no good. Instead, run short, focused training on how to use AI responsibly, especially around data privacy.

    2. Mechanics: Make Privacy Part of the System

    Practice #3: Use privacy-friendly AI tools or self-host when possible. Do your research. For highly sensitive work, explore open-source LLMs or self-hosted solutions like private GPTs or on-prem language models. It's a heavier lift, but you control the environment.

    Practice #4: Classify your data before using AI. Have a clear, documented data classification policy. Label what's confidential, internal, public, or restricted, and give guidance on what can and can't be included in AI tools. Some organizations embed DLP tools into browser extensions or email clients to prevent slip-ups (a minimal sketch of this idea follows after this post).

    3. Maintenance: Keep It Tight Over Time

    Practice #5: Audit AI usage regularly. People get busy. Policies get ignored. That's why you need a regular cadence (quarterly is a good place to start) where you review logs, audit prompts, and check who's using what.

    AI is evolving fast, and privacy expectations are only getting tighter. What other ways are you using LLM/AI in your organization?
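    Practice #4 is the most mechanical of the five, so a rough illustration may help. Below is a minimal sketch, not from the original post, of a DLP-style pre-prompt screen, assuming a simple regex approach; the pattern names and the example prompt are hypothetical, and a production deployment would rely on a real DLP product as the post suggests.

    ```python
    import re

    # Hypothetical patterns for illustration only; a real DLP tool would
    # use far richer detection than these regexes.
    BLOCKED_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "classification_label": re.compile(r"\b(CONFIDENTIAL|RESTRICTED)\b", re.IGNORECASE),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return the names of any blocked patterns found in the prompt."""
        return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

    # Example: this prompt would be stopped before it ever reaches the AI tool.
    hits = screen_prompt("Summarize this CONFIDENTIAL Q3 revenue forecast for me")
    if hits:
        print(f"Blocked: prompt matched {hits}")
    ```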

  • View profile for Dylan Davis

    I help mid-size teams with AI automation | Save time, cut costs, boost revenue | No-fluff tips that work

    4,663 followers

    Last week I spent 6 hours debugging with AI. Then I tried this approach and fixed it in 10 minutes.

    The Dark Room Problem: AI is like a person trying to find an exit in complete darkness. Without visibility, it's just guessing at solutions. Each failed attempt teaches us nothing new.

    The solution? Strategic debug statements. Here's exactly how:

    1. The Visibility Approach
    - Insert logging checkpoints throughout the code (a sketch follows after this post)
    - Illuminate exactly where things go wrong
    - Transform random guesses into guided solutions

    2. Two Ways to Implement:

    Method #1: The Automated Fix
    - Open your Cursor AI's .cursorrules file
    - Add: "ALWAYS insert debug statements if an error keeps recurring"
    - Let the AI automatically illuminate the path

    Method #2: The Manual Approach
    - Explicitly request debug statements from AI
    - Guide it to critical failure points
    - Maintain precise control over the debugging process

    Pro tip: Combine both methods for best results. Why use both? Rules files lose effectiveness in longer conversations. The manual approach gives you backup when that happens. Double the visibility, double the success.

    Remember: You wouldn't search a dark room with your eyes closed. Don't let your AI debug that way either.

    —

    Enjoyed this? 2 quick things:
    - Follow along for more
    - Share with 2 teammates who need this

    P.S. The best insights go straight to your inbox (link in bio)
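    For concreteness, here is a minimal sketch (mine, not the author's) of what the logging checkpoints in step 1 might look like in Python; the function and failure scenario are invented for illustration.

    ```python
    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
    log = logging.getLogger(__name__)

    def apply_discount(orders):
        # Checkpoint 1: confirm what actually arrived, not what we assume arrived.
        log.debug("apply_discount received %d orders", len(orders))
        discounted = []
        for order in orders:
            # Checkpoint 2: log each step so a failure points at a specific record.
            log.debug("processing order %s, total=%s", order.get("id"), order.get("total"))
            order["total"] = round(order["total"] * 0.9, 2)
            discounted.append(order)
        # Checkpoint 3: verify the output before it leaves the function.
        log.debug("returning %d discounted orders", len(discounted))
        return discounted
    ```

    With checkpoints like these in the transcript, the AI (or you) can see exactly which record broke the loop instead of guessing in the dark.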

  • View profile for Chandrasekar Srinivasan

    Engineering and AI Leader at Microsoft

    42,078 followers

    About five years ago, I had a junior engineer on my team who was brilliant but struggling so much that he was headed for a low performance review. Let's call him Anthony.

    He was fresh out of college and eager to prove himself, but his code reviews often came back with extensive feedback. The root of the issue wasn't his intelligence or effort; it was his approach.

    Anthony had this habit of jumping straight into the deep end. He wanted his code to be optimized, elegant, and perfect from day one. But in that pursuit, he often got stuck, either over-engineering a solution or ending up with something too complex to debug. Deadlines were slipping, and his confidence was taking a hit.

    One day, during a particularly rough code review, I pulled him aside and shared a principle that had profoundly shaped my own career: "Make it work, make it right, make it fast." I explained it like this:

    1. Make it work – First, solve the problem. Forget about how pretty or efficient your code is. Focus on meeting the acceptance criteria. If it doesn't work, nothing else matters.
    2. Make it right – Once it works, step back. Refactor the code and make it clean, modular, and maintainable. Code is for the humans who'll work with it in the future.
    3. Make it fast – Finally, if performance is critical, optimize. But don't sacrifice clarity or maintainability for marginal speed gains. (A small example of these stages follows after this post.)

    The next sprint, he followed this approach on a tricky API integration task. When we reviewed his work, the difference was night and day. Not only had he delivered on time, but the code was a joy to read. Even he admitted it was the least stressful sprint he'd had in months.

    Six months later, Anthony came to me and said, "That principle you shared, it's changed everything. Thank you for pulling me aside that day." Today, Anthony is a senior engineer leading his team, mentoring others, and applying the same principle that once helped him. We're still on good terms, though he moved to another org.

    Sometimes, the most impactful advice is the simplest. As engineers, we often get caught up in trying to do everything perfectly all at once. But stepping back and breaking the work into manageable steps can make all the difference.
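    To make the principle concrete, here is a small hypothetical example, not from the original story, of the first two stages applied to the same task; stage three would only follow if profiling showed this function on a hot path.

    ```python
    from collections import defaultdict

    # Stage 1, make it work: brute force, but it meets the acceptance criteria.
    def top_customer_v1(orders):
        best_name, best_total = None, 0.0
        for name in {o["customer"] for o in orders}:
            total = sum(o["amount"] for o in orders if o["customer"] == name)
            if best_name is None or total > best_total:
                best_name, best_total = name, total
        return best_name

    # Stage 2, make it right: same behavior, clearer intent, one pass over the data.
    def top_customer_v2(orders):
        totals = defaultdict(float)
        for order in orders:
            totals[order["customer"]] += order["amount"]
        return max(totals, key=totals.get, default=None)
    ```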

  • View profile for Timothy Goebel

    AI Solutions Architect | Computer Vision & Edge AI Visionary | Building Next-Gen Tech with GENAI | Strategic Leader | Public Speaker

    16,977 followers

    𝘛𝘩𝘪𝘴 𝘸𝘢𝘴 𝘴𝘰𝘮𝘦𝘵𝘩𝘪𝘯𝘨 𝘐’𝘷𝘦 𝘣𝘦𝘦𝘯 𝘱𝘶𝘵𝘵𝘪𝘯𝘨 𝘵𝘰𝘨𝘦𝘵𝘩𝘦𝘳 𝘵𝘩𝘪𝘴 𝘸𝘦𝘦𝘬.

    𝐍𝐨𝐭 𝐚𝐥𝐥 𝐀𝐈 𝐩𝐢𝐩𝐞𝐥𝐢𝐧𝐞𝐬 𝐚𝐫𝐞 𝐜𝐫𝐞𝐚𝐭𝐞𝐝 𝐞𝐪𝐮𝐚𝐥. Here’s how I integrate Microsoft Azure services to create AI that works for just about any business, not the other way around.

    Want to know the secret sauce? 👇

    7 Lessons from Building Scalable AI Solutions Customers Love:

    𝐒𝐭𝐚𝐫𝐭 𝐰𝐢𝐭𝐡 𝐜𝐥𝐞𝐚𝐧 𝐝𝐚𝐭𝐚.
    ↳ Use 𝐀𝐳𝐮𝐫𝐞 𝐃𝐨𝐜𝐮𝐦𝐞𝐧𝐭 𝐀𝐧𝐚𝐥𝐲𝐳𝐞𝐫 for structured ingestion.
    ↳ Automate preprocessing with 𝐀𝐳𝐮𝐫𝐞 𝐅𝐮𝐧𝐜𝐭𝐢𝐨𝐧 𝐀𝐩𝐩𝐬.
    ↳ Store data securely in 𝐀𝐳𝐮𝐫𝐞 𝐁𝐥𝐨𝐛 𝐒𝐭𝐨𝐫𝐚𝐠𝐞 (see the sketch after this post).

    𝐄𝐧𝐠𝐢𝐧𝐞𝐞𝐫 𝐟𝐞𝐚𝐭𝐮𝐫𝐞𝐬 𝐜𝐮𝐬𝐭𝐨𝐦𝐞𝐫𝐬 𝐯𝐚𝐥𝐮𝐞.
    ↳ Focus on actionable insights, not noise.
    ↳ Leverage 𝐀𝐳𝐮𝐫𝐞 𝐌𝐚𝐜𝐡𝐢𝐧𝐞 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 for advanced prep.
    ↳ Collaborate with end users for relevance.

    𝐓𝐫𝐚𝐢𝐧 𝐦𝐨𝐝𝐞𝐥𝐬 𝐭𝐡𝐚𝐭 𝐚𝐥𝐢𝐠𝐧 𝐰𝐢𝐭𝐡 𝐛𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐠𝐨𝐚𝐥𝐬.
    ↳ Test multiple architectures, like custom LLMs.
    ↳ Use 𝐀𝐳𝐮𝐫𝐞 𝐌𝐋 and Azure OpenAI to streamline experimentation.
    ↳ Optimize for speed and scalability.

    𝐃𝐞𝐩𝐥𝐨𝐲 𝐰𝐢𝐭𝐡𝐨𝐮𝐭 𝐝𝐢𝐬𝐫𝐮𝐩𝐭𝐢𝐧𝐠 𝐰𝐨𝐫𝐤𝐟𝐥𝐨𝐰𝐬.
    ↳ Host on 𝐀𝐳𝐮𝐫𝐞 𝐊𝐮𝐛𝐞𝐫𝐧𝐞𝐭𝐞𝐬 for reliability.
    ↳ Use 𝐀𝐳𝐮𝐫𝐞 𝐅𝐮𝐧𝐜𝐭𝐢𝐨𝐧𝐬 for seamless integration.
    ↳ Monitor deployment with feedback loops.

    𝐌𝐚𝐤𝐞 𝐝𝐚𝐭𝐚 𝐫𝐞𝐭𝐫𝐢𝐞𝐯𝐚𝐛𝐥𝐞 𝐚𝐧𝐝 𝐚𝐜𝐭𝐢𝐨𝐧𝐚𝐛𝐥𝐞.
    ↳ Index with 𝐀𝐳𝐮𝐫𝐞 𝐂𝐨𝐠𝐧𝐢𝐭𝐢𝐯𝐞 Search.
    ↳ Store outputs in 𝐂𝐨𝐬𝐦𝐨𝐬 𝐃𝐁 for scalability.
    ↳ Ensure query optimization for real-time use.

    𝐁𝐫𝐢𝐝𝐠𝐞 𝐀𝐈 𝐰𝐢𝐭𝐡 𝐛𝐮𝐬𝐢𝐧𝐞𝐬𝐬 𝐥𝐨𝐠𝐢𝐜.
    ↳ Use 𝐀𝐳𝐮𝐫𝐞 𝐅𝐮𝐧𝐜𝐭𝐢𝐨𝐧𝐬 to support decisions.
    ↳ Automate workflows for better efficiency.
    ↳ Integrate insights directly into operations.

    𝐆𝐨𝐯𝐞𝐫𝐧 𝐰𝐢𝐭𝐡 𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐚𝐧𝐝 𝐚𝐠𝐢𝐥𝐢𝐭𝐲 𝐢𝐧 𝐦𝐢𝐧𝐝.
    ↳ Use 𝐆𝐢𝐭 𝐅𝐥𝐨𝐰 for version control.
    ↳ Secure pipelines with 𝐂𝐡𝐞𝐜𝐤𝐦𝐚𝐫𝐱.
    ↳ Automate infrastructure with 𝐓𝐞𝐫𝐫𝐚𝐟𝐨𝐫𝐦.

    Which step will move your business forward today?

    ♻️ Repost to your LinkedIn followers and follow Timothy Goebel for more actionable insights on AI and innovation.

    #ArtificialIntelligence #AzureCloud #InnovationInTech #AITransformation #MachineLearningPipeline
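    As one hedged illustration of the "store data securely in Azure Blob Storage" step, here is a minimal sketch using the azure-storage-blob Python SDK; the connection string, container name, and blob name are placeholders, not values from the post.

    ```python
    from azure.storage.blob import BlobServiceClient

    # Placeholder connection string; in production, fetch secrets from
    # Azure Key Vault rather than hard-coding them.
    service = BlobServiceClient.from_connection_string("<your-connection-string>")
    container = service.get_container_client("ingested-documents")

    with open("report.pdf", "rb") as data:
        # overwrite=True keeps re-runs of the preprocessing step idempotent.
        container.upload_blob(name="2024/report.pdf", data=data, overwrite=True)
    ```

    A preprocessing Function App would typically run an upload like this at the end of its handler, so every ingested document lands in one governed store.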

  • View profile for Bryan Ansong

    engineering @ meta

    4,857 followers

    🛠️ Journey to Assessments.lol Launch - Dev Log #2: Tech Stack Deep Dive

    After much research and consideration, I've locked in the tech stack for assessments.lol. Here's why each piece matters:

    📱 Frontend: Next.js + TypeScript
    • Why? Next.js has been getting a lot of hype lately, and I wanted to take this as an opportunity to learn it. It also offers a great development experience and does a lot of heavy lifting in areas like page navigation and routing.
    • TypeScript catches bugs before they happen (TypeScript is almost a no-brainer these days; we want that extra type safety!)

    🔐 Backend: Supabase
    • PostgreSQL database with real-time capabilities
    • Built-in authentication that took 30 minutes to set up instead of weeks
    • Row Level Security means each user only sees what they're supposed to (see the sketch after this post)

    🎨 Styling: Tailwind CSS + Daisy UI + Shadcn UI
    • Consistent design system out of the box
    • No more fighting with CSS specificity issues

    ☁️ Deployment: Vercel + Cloudflare
    • Zero-config deployments
    • Edge functions for speed
    • DDoS protection included

    This entire stack is free to start with and scales beautifully as we grow. We can handle thousands of users without touching the infrastructure.

    🔑 Takeaway: Pick a tech stack that solves your core problems out of the box.

    -----------------------------------------

    For those new to my Dev Logs, assessments.lol is a platform for sharing anonymous, crowdsourced data about technical assessments at top companies. I’m sharing my thoughts and progress publicly to keep me accountable, and this is one of the Dev Logs of the journey. I’m learning a lot along the way, so if you have any valuable input, please share it in the comments!

    Join the waitlist: https://lnkd.in/emxD58bM

    Stay tuned for the next dev log! 🌟

    #BuildInPublic
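    Since Row Level Security is the least self-explanatory part of the stack, here is a minimal sketch (mine, not from the dev log), assuming the supabase-py v2 client API; the table name, column, and credentials are invented. The policy itself lives in SQL on the Supabase side, for example one restricting rows to user_id = auth.uid().

    ```python
    from supabase import create_client

    # Placeholder project URL and anon key.
    supabase = create_client("https://your-project.supabase.co", "<public-anon-key>")

    # Sign in; subsequent queries carry this user's JWT.
    supabase.auth.sign_in_with_password({"email": "user@example.com", "password": "secret"})

    # With an RLS policy on the hypothetical "assessments" table, this select
    # returns only the rows the signed-in user is allowed to see; the filtering
    # happens in Postgres, not in application code.
    rows = supabase.table("assessments").select("*").execute()
    print(rows.data)
    ```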

  • View profile for Mukund Mohan

    Marketing Head iCustomer │ Private Equity - Vangal │ Amazon, Microsoft, Cisco, and HP │ Achieved 2 startup exits: 1 acquisition and 1 IPO.

    30,299 followers

    Recently helped a client cut their AI development time by 40%. Here’s the exact process we followed to streamline their workflows.

    Step 1: Optimized model selection using a Pareto frontier. We built a custom Pareto frontier to balance accuracy and compute costs across multiple models. This allowed us to select models that were not only accurate but also computationally efficient, reducing training times by 25%. (A sketch of the idea follows after this post.)

    Step 2: Implemented data versioning with DVC. By introducing Data Version Control (DVC), we ensured consistent data pipelines and reproducibility. This eliminated data drift issues, enabling faster iteration and minimizing rollback times during model tuning.

    Step 3: Deployed a microservices architecture with Kubernetes. We containerized AI services and deployed them using Kubernetes, enabling auto-scaling and fault tolerance. This architecture allowed for parallel processing of tasks, significantly reducing the time spent on inference workloads.

    The result? A 40% reduction in development time, along with a 30% increase in overall model performance.

    Why does this matter? Because in AI, every second counts. Streamlining workflows isn’t just about speed; it’s about delivering superior results faster. If your AI projects are hitting bottlenecks, ask yourself: Are you leveraging the right tools and architectures to optimize both speed and performance?
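    The post doesn't share code, but the Pareto-frontier idea in Step 1 fits in a few lines of Python. This is a generic sketch, not the client's implementation, and the candidate models and numbers are made up.

    ```python
    def pareto_frontier(models):
        """Keep models that no other model beats on both accuracy and cost."""
        frontier = []
        for name, acc, cost in models:
            dominated = any(
                o_acc >= acc and o_cost <= cost and (o_acc, o_cost) != (acc, cost)
                for _, o_acc, o_cost in models
            )
            if not dominated:
                frontier.append((name, acc, cost))
        return frontier

    # Hypothetical candidates: (name, accuracy, relative compute cost)
    candidates = [
        ("model-a", 0.91, 4.0),
        ("model-b", 0.89, 1.5),
        ("model-c", 0.84, 1.6),  # dominated by model-b: less accurate and costlier
        ("model-d", 0.95, 9.0),
    ]
    print(pareto_frontier(candidates))  # model-a, model-b, and model-d survive
    ```

    Selection then happens along the frontier: pick the cheapest model whose accuracy clears your requirement, rather than comparing every candidate against every other by hand.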

  • View profile for David Odeleye

    AI Project Management Specialist | Leading AI with the mind of a strategist and the heart of a leader || IT Project Manager | AI Evangelist for Tech Leaders | LinkedIn Creator

    9,884 followers

    When I first started managing remote projects, I thought keeping everyone aligned would just require the right tools and regular check-ins. I quickly learned it’s so much more than that. Through trial and error, I found what really works. Most importantly, I learned that managing projects remotely is built on one thing: Trust.

    Here’s what helped me keep my teams aligned and hitting deadlines:

    1. Set Clear Expectations
    ✅ Define roles and goals early.
    ↳ Ambiguity breeds confusion. I make it a point to set crystal-clear expectations from day one.

    2. Use the Right Tools
    ✅ Hold regular video meetings.
    ↳ I used to think we could skip face time, but I quickly learned that personal connection keeps morale strong.

    3. Prioritize Strong Communication
    ✅ Stand-ups keep my team aligned, but more importantly, they foster accountability.

    4. Focus on Outcomes, Not Hours
    ✅ I learned to measure success by outcomes instead of hours.
    ↳ Trusting my team to deliver results made them more motivated and productive.
    ✅ Celebrating milestones is the fuel that keeps everyone going.

    5. Build a Strong Team Culture
    ✅ Scheduling virtual team-building activities has helped create a community.
    ↳ This sense of belonging is what ultimately pushes us to succeed together.

    Managing remote teams is about building trust and creating clear goals.

  • View profile for Sujeeth Reddy P.

    Software Engineering

    7,738 followers

    In the last 11 years of my career, I’ve participated in code reviews almost daily. I’ve sat through 100s of review sessions with seniors and colleagues. Here’s how to make your code reviews smoother, faster, and easier:

    1. Start with Small, Clear Commits
    - Break your changes into logical, manageable chunks. This makes it easier for reviewers to focus and catch errors quickly.

    2. Write Detailed PR Descriptions
    - Always explain the “why” behind the changes. This provides context and helps reviewers understand your thought process.

    3. Self-Review Before Submitting
    - Take the time to review your own code before submitting. You'll catch a lot of your own mistakes and improve your review quality.

    4. Ask for Specific Feedback
    - Don’t just ask for a “review”; be specific. Ask for feedback on logic, structure, or potential edge cases.

    5. Don’t Take Feedback Personally
    - Code reviews are about improving the code, not critiquing the coder. Be open to constructive criticism and use it to grow.

    6. Prioritize Readability Over Cleverness
    - Write code that’s easy to understand, even if it’s less “fancy.” Simple, clear code is easier to maintain and review.

    7. Focus on the Big Picture
    - While reviewing, look at how changes fit into the overall system, not just the lines of code. Think about long-term maintainability.

    8. Encourage Dialogue
    - Reviews shouldn’t be a one-way street. Engage in discussions and collaborate with reviewers to find the best solution.

    9. Be Explicit About Non-Blocking Comments
    - Mark minor suggestions as “nitpicks” to avoid confusion. This ensures critical issues get addressed first.

    10. Balance Praise and Criticism
    - Acknowledge well-written code while offering suggestions for improvement. Positive feedback encourages better work.

    11. Always Follow Up
    - If you request changes or leave feedback, follow up to make sure the feedback is understood and implemented properly. It shows you’re invested in the process.

    --

    P.S.: What would you add from your experience?

  • View profile for Michael Ricordeau

    Founder & CTO at Plivo

    3,554 followers

    Tech debt can bring down even the brightest of startups. Here are three ways we reversed ours at Plivo:

    1. We restructured our org chart.

    Initially we had three specialist teams: backend engineering, infra (CloudOps/SRE/DevOps today), and VoIP engineering. As a CPaaS company we have a complex voice-calling API stack. This worked well when Plivo had 10 employees, but by 2016 we had 40 and multiple products, which led to constant context switching and the exacerbation of our tech debt.

    To solve this, we switched to a new product-based team structure. The goal of this new structure was to reduce context switching and ensure that each employee could focus on what they were best at. We built teams for our voice API, SMS API, Billing/Payments, and SDKs, and each team had its own developers, QAs, engineering managers, and product owners.

    When we switched to this new structure, we began to notice improvements across the board: bugs were getting fixed faster, latency and uptime improved, teams had the bandwidth to break down monoliths, and our products were getting better and faster.

    2. We built infrastructure for scale.

    In the early days of Plivo, we were deploying a new AWS EC2 image with each release instead of incremental code upgrades. This gave us the flexibility to access and re-deploy older images, but the process was quite slow and hurt our productivity. It resulted in the rapid accumulation of tech debt because we could not iterate fast.

    We didn’t have a staging environment either. Services would crash and we had no logic to switch over, so we created a dedicated AWS account for our staging setup. This dual-account setup was sufficient for a while, but when we switched to product-based teams, we chose to replicate this structure for each product within Plivo. This increased our overhead substantially but also gave us much more stability. The addition of QA engineers for each product reduced surprises further.

    In 2018, we integrated all our operations into Docker, significantly transforming Plivo's development process. We were able to set up a robust CI/CD pipeline with added layers of security, compliance, and immutability. Consequently, our engineers could develop on local setups and accelerate deployment times.

    3. We made customer experience our north star metric.

    As a tech person, I tend to prioritize tech metrics, e.g., our p99 latency. However, during the five months we spent reversing our tech debt, we decided to prioritize the one thing that would improve customer experience the most at any given moment. To determine what that one thing was, we paid attention to the number of incidents, support tickets, and alerts. Rome was ablaze, but we still needed to pick and choose which specific buildings to throw buckets of water at; these metrics informed our firefighting efforts.

    Our battle against tech debt is ongoing, but we now feel well-equipped to fight it thanks to our scalable org structure, CI/CD pipeline, and customer-centricity!

  • I’ve successfully managed remote teams for 20 years, without micro-managing. It's a lot simpler than most people think. Here’s how I do it 👇

    I started managing offshore development teams at GE in 2004. Now, my entire team is remote.

    Managing remote teams can be tricky, especially if everyone on the team is performing at different levels. One tactic has helped me the most: creating habits! Tiny habits lead to big results. But in a remote world, how do you know everyone practices good habit hygiene?

    Here’s my system:

    1. Set Clear Goals for Everyone
    ⮑ Make sure each team member knows their targets.
    ⮑ This helps them stay focused and productive.

    2. Use Activity Logs Wisely
    ⮑ Ask for daily or weekly logs that highlight key tasks completed.
    ⮑ This provides insight without being invasive.

    3. Encourage 15-min Regular Check-ins
    ⮑ Schedule brief, consistent meetings to discuss progress.
    ⮑ These touchpoints keep everyone aligned and accountable.

    4. Embrace Collaborative Tools
    ⮑ Use tools like Slack, Gong, and HubSpot to track activity.
    ⮑ This keeps everyone in the loop and eases communication.

    5. Celebrate Small Wins
    ⮑ Acknowledge milestones and achievements regularly.
    ⮑ This boosts morale and keeps the team motivated.

    6. Offer Constructive Feedback
    ⮑ Provide timely and specific feedback on work completed.
    ⮑ This helps team members improve and stay on track.

    7. Foster a Culture of Trust
    ⮑ Build trust by being transparent and supportive.
    ⮑ This creates a positive work environment where everyone thrives.

    Each week at Miva I hold:
    - 15-minute weekly 1x1s w/ my direct reports
    - 30-minute functional team meetings w/ each GTM function
    - 30-minute GTM all-hands on Friday

    During our GTM all-hands, we discuss our activity goals and how we did. We also share learnings and ideas on how to improve. When we do the right reps, the results take care of themselves.