Social Media

WhatsApp Introduces Group Message History Feature for New Members


Joining a group chat midway through an ongoing conversation has often been confusing, leaving new members without context and forcing them to rely on others for updates. Addressing this common issue, WhatsApp has introduced a new feature called Group Message History, aimed at making group interactions more seamless and efficient.

With this update, group administrators can now share recent chat messages with newly added members, helping them quickly understand the flow of conversation. Previously, users joining a group could only see messages sent after they were added, which often led to communication gaps, especially in work-related groups, community discussions, and long-standing family chats. Members frequently had to request screenshots or ask others to resend important details, creating unnecessary back-and-forth.

The new feature offers flexibility and control to administrators. Sharing is not automatic: admins must actively choose to share message history when adding a new member. They can also decide how many messages to include, with options ranging from the 25 to the 100 most recent messages. This ensures that new participants receive enough context without being overwhelmed by excessive information.

Administrators retain full authority over the feature, including the ability to enable or disable it based on group preferences. While regular members cannot share chat history, admins continue to have exclusive control over what is shared with newcomers.

Importantly, WhatsApp has clarified that the introduction of this feature does not compromise user privacy. All shared messages remain protected under the platform's end-to-end encryption system, ensuring that only group members can access the content. To maintain transparency, WhatsApp notifies existing group members whenever chat history is shared with a new participant. Additionally, these shared messages retain their original timestamps and sender details and are visually distinguished from regular messages for clarity.

The feature is currently being rolled out globally and will be available on both Android and iOS devices. Users who do not immediately see the update are advised to install the latest version of the app and wait for the rollout to reach their region. With Group Message History, WhatsApp aims to enhance user experience by reducing confusion, improving communication flow, and making group conversations more inclusive for everyone involved.

Business

Labour Market Resilience in Focus at India AI Impact Summit 2026


Labour market resilience emerged as a central theme at the India AI Impact Summit 2026 during a session titled "Global Dialogue on AI Usage – Data for Labour Market Resilience." The discussion examined the changing nature of work amid accelerating artificial intelligence adoption and the policy choices required to manage the transition effectively.

Drawing on emerging international evidence, panellists noted that AI's impact on employment is differentiated across age groups, sectors and geographies. Early trends suggest that younger workers in roles with higher AI exposure may be experiencing employment pressures. However, the absence of comprehensive and comparable cross-country data continues to limit governments' ability to design timely and targeted interventions.

The discussion underscored the importance of moving forward with adaptive policy frameworks even in the absence of perfect information. Strengthening social protection systems, expanding reskilling pathways and designing context-specific strategies for sectors such as services, agriculture and public delivery were highlighted as essential steps to ensure inclusive growth.

Shamika Ravi, Member of the Economic Advisory Council to the Prime Minister, observed that India shows one of the highest levels of firm-level AI adoption, characterised by openness and optimism. While productivity effects are still being measured, she noted that AI in India is likely to be applied to long-standing challenges in health, education and services, particularly where last-mile connectivity constraints have limited outcomes.

Yoshua Bengio, Professor at Université de Montréal and a leading AI expert, stated that employment trends observed over the past five years are likely to continue shaping the job market. He cautioned that access to AI will increasingly become a competitive advantage, underscoring the need for international coordination and dialogue to ensure AI development benefits all.

Representatives from Microsoft and OpenAI highlighted that much of the existing evidence on AI's employment impact is concentrated in a few countries, particularly the United States, with limited data available from emerging economies. This gap makes it difficult to draw firm conclusions and reinforces the need for systematic global data collection on AI adoption and employment outcomes.

The session concluded that strengthening labour market resilience in the AI era will require better measurement of technology adoption, anticipatory governance, coordinated investments in skills and institutional capacity, and robust social protection systems. Only through such integrated efforts can productivity gains from AI translate into broad-based economic and social benefits.

TechPulse

OpenClaw Creator Peter Steinberger Joins OpenAI as Sam Altman Accelerates AI Agent Strategy


OpenAI CEO Sam Altman announced that Peter Steinberger, the creator of the viral AI agent OpenClaw, is joining OpenAI as the company sharpens its focus on next-generation autonomous AI systems. Altman confirmed that OpenClaw will continue to operate as an open-source project under a foundation model, with OpenAI providing ongoing support.

OpenClaw, previously known as Clawdbot and Moltbot, was launched just last month by Steinberger and quickly gained momentum across social media and developer communities. Its rapid rise reflects the growing demand for AI agents capable of independently completing tasks, making decisions, and taking actions on behalf of users without constant human oversight. Businesses and consumers alike are increasingly experimenting with AI systems that can handle workflows, research, communication, and operational processes autonomously.

In a post on X, Altman said Steinberger would join OpenAI "to drive the next generation of personal agents," describing him as "a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people." Altman added that intelligent agents are expected to become core to OpenAI's product offerings in the near future.

Although financial terms were not disclosed, the move underscores the intensifying competition for AI talent across the technology sector. Earlier this year, OpenAI acquired former Apple designer Jony Ive's AI devices startup io for more than $6 billion. Technology giants including Meta and Google have also been investing billions to attract top AI researchers and developers.

OpenAI, most recently valued at $500 billion, faces mounting competition in the generative AI market, particularly from Anthropic. Anthropic's Claude models have been gaining traction among enterprise clients, especially with tools such as Claude Code. The company recently introduced Claude Opus 4.6, which it says improves coding capabilities, sustains tasks for longer durations, and delivers higher-quality professional output. Anthropic was reportedly valued at $380 billion in a fundraising round earlier this week.

OpenClaw has also expanded quickly in China, where it can integrate with locally developed language models such as DeepSeek and be configured for use with domestic messaging platforms. Chinese search engine Baidu plans to offer users of its main smartphone application direct access to OpenClaw.

However, some researchers have expressed concerns about the openness of OpenClaw and the potential cybersecurity risks posed by highly customizable AI agents that users can modify extensively. As AI systems become more autonomous and interconnected, the balance between innovation, openness, and security is expected to remain a central issue in the rapidly evolving artificial intelligence landscape.

With Steinberger joining OpenAI and OpenClaw continuing as an open-source initiative, the company appears determined to strengthen its leadership in the emerging era of intelligent AI agents capable of operating with greater independence and collaboration.

Fit & Fabulous

Designer Kate Barton Teams Up with IBM and Fiducia AI for Immersive NYFW Presentation


At New York Fashion Week, designer Kate Barton unveiled her latest collection with an innovative twist, merging high fashion with cutting-edge artificial intelligence. In collaboration with Fiducia AI and IBM, Barton introduced a multilingual AI agent built with IBM watsonx on IBM Cloud, offering guests an interactive and immersive runway experience.

The activation allows attendees to identify pieces from the collection in real time using a Visual AI lens powered by IBM watsonx. Beyond recognition, the tool answers questions in multiple languages via voice and text and enables photorealistic virtual try-ons, effectively creating what Barton describes as "a portal into the collection's world" rather than deploying artificial intelligence for novelty alone.

Speaking ahead of the show in an interview with TechCrunch, Barton emphasised that technology has long been part of her creative thinking. She expressed interest in blending the real and the unreal to spark curiosity, explaining that today's technology expands the world around the clothes and shapes how audiences enter the story behind a collection. For her, the objective was not automation but deeper engagement: creating moments that make viewers pause and look twice.

Ganesh Harinath, Founder and CEO of Fiducia AI, explained that the activation relied on IBM watsonx, IBM Cloud and IBM Cloud Object Storage. He noted that while model tuning was complex, the real challenge lay in orchestrating the system into a seamless, production-grade experience. The collaboration marks Bartonโ€™s continued experimentation with AI, following earlier technological integrations in past collections.

The broader fashion industry remains cautiously curious about artificial intelligence. Barton observed that many brands are quietly using AI in operational capacities but hesitate to showcase it publicly due to reputational concerns. She compared the hesitation to the early days of e-commerce, when luxury houses debated whether they should even launch websites, a question that later evolved into how effectively they used them.

Industry voices suggest that while AI adoption is growing, much of its current use remains surface-level, such as chatbots or internal productivity tools. Barton, however, envisions a future where AI enhances prototyping, visualisation and production decisions, while preserving the human craftsmanship that defines fashion. She has made it clear that technology must elevate, not erase, the people behind the work.

According to industry projections shared during the conversation, AI in fashion could become mainstream by 2028, with deeper operational integration by 2030. Leaders within IBM Consulting highlighted how connecting inspiration, product intelligence and real-time engagement can transform AI from a novelty feature into a strategic growth engine.

Yet for Barton, the ultimate goal remains clear. The future of fashion, she argues, is not automated fashion. It is fashion that embraces new tools to heighten craft, deepen storytelling and broaden access, without diminishing the human creativity that makes garments meaningful. At NYFW, that vision stepped confidently onto the runway, offering a glimpse of how art and algorithm might coexist in the next chapter of design.

TechPulse

X Admits Lapse in India, Removes 3,500 Grok Posts and Deletes 600 Accounts Over Objectionable Content


Written by Tanisha Cardozo | Team Allycaral

Microblogging platform X has acknowledged lapses in handling objectionable content generated by its AI chatbot Grok, leading to the removal of approximately 3,500 posts and the deletion of over 600 accounts in India. The action came about a week after the Ministry of Electronics and Information Technology raised serious concerns over obscene and sexually explicit content linked to the AI tool.

Officials aware of the development said the company accepted its mistake and committed to complying with Indian laws. According to a communication shared with authorities, X assured that it would not allow obscene imagery going forward. However, neither MeitY nor X issued an official public statement detailing the timeline or scope of the action taken.

Grok, developed by Elon Musk's xAI and integrated into X, has faced intense scrutiny globally after users exploited its image-generation and editing capabilities to create non-consensual and sexualised deepfake images, including those involving women and minors. These images spread rapidly on the platform, prompting investigations by regulators in multiple countries. Indonesia has already suspended access to Grok, while authorities in the European Union and the UK have launched probes into the tool's safeguards.

MeitY formally wrote to X on January 2, flagging what it described as serious failures in preventing obscene content generated using Grok. The ministry warned that continued non-compliance could result in X losing its safe harbour protection under Section 79 of the Information Technology Act. X sought an extension to respond, citing the Christmas and New Year holidays, with the deadline set for January 7.

Officials indicated that the ministry was dissatisfied with Xโ€™s initial response, which largely reiterated existing user policies without detailing concrete enforcement actions. This prompted MeitY to seek a more detailed report outlining specific steps taken against offending content and accounts. The government also clarified that Grok would be treated as a content creator rather than merely a platform tool, a classification that could significantly impact intermediary liability.

The ministry noted that misuse of Grok was not limited to fake accounts but also targeted women who uploaded their own photos or videos, which were then manipulated using AI prompts. The letter cited violations under multiple Indian laws, including provisions of the IT Act, the Bharatiya Nyaya Sanhita, the Indecent Representation of Women (Prohibition) Act, and the Protection of Children from Sexual Offences Act.

X was directed to comprehensively review Grok's prompt processing, output generation, image handling and safety guardrails, and to enforce strong deterrent measures such as account suspensions and terminations. MeitY officials have stated that compliance by X and other platforms will continue to be closely monitored, warning that any recurrence of violations could invite stricter action.

The controversy has also drawn political attention, with Shiv Sena (UBT) MP Priyanka Chaturvedi accusing X of monetising harmful behaviour after restricting Grok's image-generation feature to paid users. The episode underscores growing global concerns around AI-generated content, especially as reports indicate a sharp rise in AI-generated abuse imagery worldwide, intensifying calls for stricter regulation and accountability.