This is part two of Future Trends in Pentesting. Join Darrius and Spencer on The Cyber Threat Perspective as they explore future trends in penetration testing, covering the impact of AI tools, language models, and the ongoing debate over the public release of information and tools.
Future Trends in Penetration Testing
In this section, Darrius and Spencer discuss the importance of staying updated on future trends in the penetration testing industry.
Importance of Staying Updated
- The industry is constantly changing with new techniques, threats, exploits, and vulnerabilities.
- Staying on top of these changes helps professionals respond effectively during engagements and provide knowledgeable insights to clients.
- Continuous improvement and adaptation are necessary to keep pace with evolving technology and industry standards.
- Falling behind can result in being outdated and ineffective in addressing new challenges.
Increase in AI Language Models
- The use of AI language models, such as GPTs (Generative Pre-trained Transformers), is becoming more prevalent in penetration testing.
- These models have potential applications for improving the pen testing process by assisting with information analysis, prioritizing attack paths, and enhancing understanding of targets.
- While current applications may not be groundbreaking, there is anticipation for further advancements that will shape the future of pen testing workflows.
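One of the applications mentioned above, prioritizing attack paths, can be sketched as a prompt-construction step. The findings, hosts, and wording below are hypothetical examples, not output from any real engagement; the resulting prompt would be sent to whichever chat-completion API the tester uses.

```python
# Sketch: formatting raw scan findings into an LLM prompt that asks for
# attack-path prioritization. Findings here are made-up examples.

def build_prioritization_prompt(findings: list[dict]) -> str:
    """Render findings as a numbered list and ask the model to rank them."""
    lines = [
        f"{i}. {f['host']}:{f['port']} - {f['service']} - {f['issue']}"
        for i, f in enumerate(findings, start=1)
    ]
    return (
        "You are assisting a penetration test. Rank the findings below by "
        "likely value as an attack path, and briefly justify each ranking:\n"
        + "\n".join(lines)
    )

findings = [
    {"host": "10.0.0.5", "port": 445, "service": "smb", "issue": "signing not required"},
    {"host": "10.0.0.9", "port": 8080, "service": "http", "issue": "default Tomcat credentials"},
]
prompt = build_prioritization_prompt(findings)
print(prompt)
```

The model's ranking is a starting point for the tester's own judgment, not a substitute for it.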
Conclusion
Staying updated on future trends is crucial for effective penetration testing. It allows professionals to adapt to new techniques, threats, and technologies while continuously improving their skills and knowledge. The increasing use of AI language models presents opportunities for enhancing the pen testing process but requires further development to fully realize its potential impact.
AI and Soft Skills
The speaker discusses how AI can be used to enhance soft skills in the industry, particularly for individuals who may struggle with interpersonal communication. They mention the advantages of using language models to turn raw notes into a digestible format for clients or target audiences.
AI’s Impact on Soft Skills
- AI can assist individuals who are technically skilled but lack proficiency in soft skills.
- Language models can be used to convert written notes into a more understandable format for clients or target audiences.
- Current AI tools have limitations in handling real-world and advanced scenarios.
- Beginner or junior pentesters can benefit from using AI tools to gain a better understanding of their work.
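The note-conversion idea above can be sketched as a small pre-formatting step: terse tester notes are rendered into a structured section that a language model (or a human editor) can then polish into client-facing wording. The note format and example findings are assumptions for illustration.

```python
# Sketch: pre-formatting scratch notes before handing them to an LLM
# for client-readable wording. The note style here is hypothetical.

def notes_to_report_section(title: str, notes: list[str]) -> str:
    """Render scratch notes as a markdown section ready for polishing."""
    body = "\n".join(f"- {n.strip()}" for n in notes if n.strip())
    return f"## {title}\n\n{body}\n"

section = notes_to_report_section(
    "Internal Network Findings",
    ["llmnr enabled, captured hashes", "smb relay -> local admin on 3 hosts"],
)
print(section)
```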
The Importance of Experimenting with New Technologies
The speaker emphasizes the importance of exploring and experimenting with various technologies and tools in the field of pentesting. They discuss the ongoing debate about the best platforms and tools for pentesting, highlighting the need for diverse experience.
Exploring Different Technologies
- It is crucial to have experience with different technologies and tools in pentesting.
- There is an ongoing debate about the best platform (e.g., Kali, Windows, Parrot Linux) and tools (e.g., CrackMapExec, C2 tooling) for pentesting.
- Being a first mover and gaining early experience with new technologies can provide an advantage in the industry.
- Developing skills now will be beneficial as AI becomes increasingly prominent in the field.
Language Models and Pretexting
The speaker discusses how language models can be utilized for pretexting purposes. They also touch upon future trends involving video manipulation, voice translation, and challenges related to identifying authentic content.
Language Models for Pretexting
- Applications utilizing language models allow users to generate summaries or text in different tones or styles.
- Custom language models can assist in formatting scratch notes into a more organized format for reports.
- Pretexting, phishing, and other techniques are expected to become more powerful with advancements in AI, including video and voice manipulation.
- The speaker expresses concerns about the authenticity of images and videos due to advanced technology, leading to the need for watermarking or other methods of verification.
Challenges with Authenticity and AI Detection
The speaker discusses the challenges associated with detecting AI-generated content and emphasizes that relying on AI alone may not be sufficient. They mention potential strategies such as watermarking or incorporating identifying marks within content.
Detecting Authenticity and AI
- Deepfake videos pose challenges in determining their authenticity using AI detection methods.
- When two AI models are trained against each other, as in generative adversarial setups, identifying deepfakes becomes an arms race.
- Content creators may need to subtly watermark their videos or include identifying marks to prove authenticity.
- Struggles related to verifying original content will continue as technology advances.
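The watermarking idea above can be illustrated with a deliberately simple scheme: the creator publishes an HMAC tag over the content bytes, computed with a key only they hold. Real provenance efforts (such as C2PA) are far more involved; the key and content below are placeholders.

```python
# Sketch: a minimal content-authenticity mark using an HMAC over the
# file bytes. Illustrative only - not a production provenance scheme.
import hashlib
import hmac

SECRET_KEY = b"creator-private-key"  # hypothetical key held by the creator

def watermark(content: bytes) -> str:
    """Return a tag the creator can publish alongside the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Anyone holding the key can check the content was not altered."""
    return hmac.compare_digest(watermark(content), tag)

tag = watermark(b"original video bytes")
print(verify(b"original video bytes", tag))   # True
print(verify(b"tampered video bytes", tag))   # False
```

Note the limitation the speakers hint at: this proves integrity to someone who trusts the key, but it cannot by itself prove the content was not AI-generated in the first place.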
The Impact of AI Model Training on Data Pollution
In this section, the speaker discusses the potential threat of data pollution in AI models and how they are trained using other models or data produced by other models. They emphasize the importance of well-trained data and the need for caution in developing local models.
AI Models Trained on Other Models or Data
- AI models are often trained using open-source or pre-existing models, such as OpenAI’s model.
- There is a chain of data being trained and influenced by other models, leading to potential model pollution or data pollution.
- The effectiveness of an AI model depends on the quality of the training data and how well it is trained.
Rise of DIY Local Models
- Developing local models and selling them could become a billion-dollar industry.
- Organizations or individuals may create tools for others to download and train their own local models.
- Security concerns arise with the proliferation of DIY local models, requiring caution from a security standpoint.
Llama and Facebook’s Influence
- Facebook's release of Llama, which lets users run and fine-tune their own local models, demonstrates the increasing trend toward individual model development.
- Similar to other technologies, there is a cycle where new technology becomes centralized, then decentralized as people develop their own solutions.
Automated Continuous Pen Testing
This section focuses on automated continuous pen testing as an emerging trend. The speaker shares thoughts on its potential benefits but also highlights limitations compared to manual pen testing.
Increase in Automated Continuous Pen Testing
- With advancements in AI tools, there will likely be an increase in automated continuous pen testing services.
- It can be a way for organizations to enhance vulnerability scanning beyond traditional methods.
Limitations Compared to Manual Pen Testing
- Automated continuous pen testing may find low-hanging fruit that vulnerability scanners miss, but it still falls short of manual testing when it comes to validation and verification.
- Manual intervention and thought process are often required for complex attacks, such as those involving multiple chains of actions.
Marginal Impact and Affordability
- Automated continuous pen testing may have a marginal impact on organizations that can afford it but may not necessarily need it.
- The cost of automated continuous pen testing is currently high, so it is mostly within reach of organizations that already have robust security measures in place.
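The kind of check such services automate is well suited to code, e.g., diffing exposed services between scan runs to flag new attack surface. The hosts and ports below are hypothetical; anything flagged would still need the manual validation the speakers describe.

```python
# Sketch: one check a continuous scanning service might automate -
# comparing (host, port) exposure snapshots between runs.

def diff_exposure(previous: set[tuple[str, int]],
                  current: set[tuple[str, int]]) -> dict:
    """Report services that appeared or disappeared since the last scan."""
    return {
        "new": sorted(current - previous),
        "closed": sorted(previous - current),
    }

before = {("10.0.0.5", 22), ("10.0.0.5", 443)}
after = {("10.0.0.5", 22), ("10.0.0.5", 443), ("10.0.0.7", 3389)}
changes = diff_exposure(before, after)
print(changes)  # flags 10.0.0.7:3389 as newly exposed
```

This catches low-hanging fruit like a newly exposed RDP port, but it cannot chain findings together the way a human tester would.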
Automated SAST Continuous Pen Testing
This section discusses the existing practice of automated SAST (static application security testing) scanning in DevSecOps pipelines. The speaker shares their perspective on its current implementation and potential future developments.
Existing Implementation in DevSecOps Pipelines
- Many companies already incorporate automated security checks, such as SAST scans, into their development pipelines.
- These checks occur automatically before pushing code to production, helping address vulnerabilities early on.
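The pipeline gate described above can be sketched as a small script that fails the build when the scan reports blocking findings. The finding fields and severity threshold here are assumptions; real tools typically emit a standard format such as SARIF.

```python
# Sketch: a DevSecOps gating step run after a SAST scan. The finding
# structure is hypothetical; real scanners emit SARIF or similar.

def gate(findings: list[dict], fail_on: str = "high") -> int:
    """Return a nonzero exit code if any finding meets the threshold."""
    blocking = [f for f in findings if f["severity"] == fail_on]
    for f in blocking:
        print(f"BLOCKING: {f['rule']} in {f['file']}")
    return 1 if blocking else 0

findings = [
    {"rule": "hardcoded-secret", "file": "app/config.py", "severity": "high"},
    {"rule": "weak-hash", "file": "app/auth.py", "severity": "medium"},
]
exit_code = gate(findings)
print(exit_code)  # 1 -> the pipeline stops before production
```

In a real pipeline the exit code would be returned via `sys.exit()` so the CI system halts the deployment.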
Not a Novel Concept
- Automated SAST scanning in the pipeline is not a new concept but has gained attention with the integration of AI technology.
- It can be seen as an enhanced form of vulnerability scanning rather than a replacement for manual pen testing.
Market Potential and Monetization
- Companies will likely market automated SAST-based continuous pen testing as an advanced solution, allowing them to charge higher prices.
- However, the speaker questions whether this approach offers significant advantages beyond existing practices.
Conclusion
This section covers two main topics: the impact of AI model training on data pollution and the emergence of automated continuous pen testing. It highlights the importance of well-trained data in AI models and raises concerns about potential model pollution. Additionally, it explores the benefits and limitations of automated continuous pen testing compared to manual methods. The discussion also touches upon existing implementations of automated SAST scanning in DevSecOps pipelines.
The Evolution of Security
In this section, the speaker discusses the evolution of security and how it is still relatively new in the context of computer security.
The Constant Need for Innovation
- The speaker mentions that conversations about automated vulnerability scanning and other advancements in security have been ongoing for years.
- They predict that in the future, there will be discussions about even more advanced automation techniques.
- The rush to lower costs and scale up security assessments has led to a potential dilution of their value.
Research and Development in Offensive Security
This section focuses on the importance of research and development (R&D) in offensive security due to the increasing effectiveness of defensive measures.
Increasing Barriers to Entry
- With the prevalence of EDR systems and other monitoring tools, defensive measures are becoming more effective at detecting attacks.
- Defenders are getting better educated on attacks and techniques, raising the bar for offensive security professionals.
- R&D is crucial for offensive security professionals to stay ahead and provide value to clients.
- Copy-pasting from online sources is no longer sufficient; deeper knowledge and expertise are required.
Future Challenges for Red Teaming
This section explores the challenges faced by red teamers as technology becomes more complex, training programs struggle to keep up with real-world developments, and barriers to entry increase.
Complex Technology Requires Deep Knowledge
- As technology advances, red teamers need a deep understanding of various systems, APIs, programming languages, etc., to effectively bypass defenses.
- Basic knowledge or copy-pasting scripts is no longer enough; specialized skills are necessary.
Uncertain Future for Red Teaming
- Training programs and certifications often lag behind real-world developments, making it challenging for aspiring red teamers to acquire relevant skills.
- The speaker expresses curiosity about the future of red teaming and offensive security, given the increasing complexity and barriers to entry.
The Popularity of Offensive Security
The offensive security industry is becoming more popular and trendy, attracting more people and leading to the development of new tools and research.
Offensive Security as a Trend
- Offensive security is seen as cool, trendy, and fun.
- The industry is growing in popularity and becoming a trade.
Increasing Interest in Offensive Security
- More people are getting into offensive security.
- There is an increase in the development of tools and release of research.
- R&D investment in offensive security will continue to progress.
Keeping Tools Internal
- Professional red teams often keep their tooling internal to avoid widespread use and the risk of their techniques being burned (detected and signatured by defenders).
- Many teams develop their own tools or customize existing products for internal use.
- The trend is to keep tools and research in-house rather than sharing them publicly.
Developing R&D Teams
- Establishing dedicated R&D teams or customizing purchased products internally is a growing trend.
- This approach allows for tailored solutions and avoids burning valuable techniques or exploits.
Polarization in the Industry
- Some believe that everything should be free and public, including training, tools, and vulnerabilities (CVEs).
- Others argue for keeping everything behind closed doors.
- The answer lies somewhere in between, with some releasing publicly while others keep things internal.
Balancing Public Releases with Internal Knowledge
There is a tradeoff between sharing knowledge with the community through public releases and keeping effective techniques internal to provide value to clients.
Releasing Techniques Publicly
- If a technique becomes less effective or not used frequently, it can be released publicly to share knowledge with the community.
Keeping Effective Techniques Internal
- Specific techniques developed or tradecraft that provide value to clients are kept internal.
- Continual research on these techniques helps improve effectiveness and provide more value to clients.
Conclusion
The offensive security industry will continue to have a divide between those who publicly release their findings and those who keep things internal. Finding a balance between the two approaches is crucial for progress in the field.
Polarization in Offensive Security
- The industry is polarized between closed source and open source approaches.
- There will always be a clash between these two camps.
Releasing Publicly vs. Keeping Internal
- Some content will be released publicly, while other valuable techniques are kept internal.
- Sharing knowledge with the community can benefit everyone, but maintaining effective techniques internally is essential for providing value to clients.
Blog: https://offsec.blog/
Youtube: https://www.youtube.com/@cyberthreatpov
Twitter: https://twitter.com/cyberthreatpov
Work with Us: https://securit360.com