
Wednesday, April 24, 2024

Meta brings AI, video calls, and new styles to its Ray-Ban smart glasses

 


 The idea of smart glasses has never quite taken off, but Meta appears to have cracked the code with the smart glasses it developed in collaboration with Ray-Ban, and it is now adding more AI functions, video calling, and new styles to the lineup.

Meta's second-generation smart glasses launched last year to considerable praise. The glasses pack speakers, microphones, and a built-in camera to deliver helpful tools in an ambient, hands-free way. Even without an integrated display, they're a practical tool.

 


 

 Meta is now expanding on this concept in several significant ways.

Recent updates are teaching the Ray-Ban Meta smart glasses new skills, and two of them involve the camera. In a bit of marketing synergy, the glasses' camera now works with video calls in Facebook Messenger and WhatsApp. The feature is rolling out "gradually," but it looks like a genuinely useful way to share your point of view.
 
 The camera also now enables multimodal AI. In other words, you can ask Meta AI questions about what you're looking at, and it will use a photo of your view for context. After testing late last year, the feature is now rolling out in beta to all glasses owners in the US and Canada.

Meta has shared an example of how this can be useful.
 

 
 
 Finally, new styles of Meta Ray-Bans are available. Preorders are now open for the "Skyler" frame and a new low-bridge Headliner design.

You can preorder the new models from Ray-Ban and Meta's web store. A selection of Meta Ray-Ban smart glasses styles is also available from Amazon, Best Buy, and other retailers.

As noted by 9to5Mac, a recent Meta update also added Apple Music integration.
 
 
 

 

Tim Cook hints at new Apple Pencil 3 coming next month – here’s what the rumors say

 


 

Apple Pencil


 Apple announced on Tuesday that it will host a special event on May 7. Although the company usually keeps event topics under wraps, the artwork on the invitation, which features an Apple Pencil, clearly points to an iPad focus. Apple CEO Tim Cook added fuel to the fire by hinting that, at the very least, a new Apple Pencil 3 will arrive next month.

 
 
 
 

Apple Pencil 3 is coming soon

 

 Cook posted the event's artwork on X along with the message, "Pencil us in for May 7!" By adding a pencil emoji to the post, the CEO all but confirmed that Apple will introduce new iPads, and possibly a new Apple Pencil, at its special "Let Loose" event on May 7.

The last new generation of Apple Pencil was released in 2018 (aside from the cheaper USB-C model that debuted last year). That upgrade brought magnetic charging, a new double-tap gesture, a matte finish, and improved grip. So what, specifically, can we expect from the Apple Pencil 3?
 
 A couple of distinct rumors suggest the upcoming Apple Pencil may gain Find My integration. Much like AirPods and AirTags, this would make it easy for users to locate and track a lost Apple Pencil. Some reports also claim the Apple Pencil 3 will feature interchangeable magnetic tips.

Additionally, 9to5Mac has uncovered several bits of information suggesting the new Apple Pencil will support a "squeeze" gesture. The new Apple Pencil may detect when a user presses down on its barrel to carry out certain quick tasks, such as signing documents or adding text.
 
 

What about the new iPads?

 New iPads will, of course, headline an iPad event. Rumors point to new iPad Air and iPad Pro models. The next iPad Pro is expected to be powered by the M3 chip and feature OLED displays along with a slimmer design. The iPad Air 6 will reportedly use the M2 chip and, for the first time, come in a bigger version with a 12.9-inch screen.

Furthermore, rumors suggest Apple has created a new Magic Keyboard that would replace the iPad's current floating design and make it resemble a conventional laptop.

Apple's special event will be streamed online at 7 a.m. PT / 10 a.m. ET.
 
 

 

 
 
 

Windows 11 Start menu ads are now rolling out to everyone

 


 

Windows 11 Start menu

 

How to Use Photoshop’s New AI-powered Image Tools

 AI images are nothing new for Adobe, the maker of Photoshop, but today marks a significant step in its push toward ethical, accessible generative AI. For the first time, users can generate entire images from scratch inside the Photoshop app without ever leaving it. The capability is powered by the newly released Firefly Image 3 model, alongside features like background generation, reference image uploads, and iterative AI art.

 


 


What is Adobe Firefly?

Firefly, Adobe's take on an AI art generator, has been gradually incorporated into Photoshop since last year, powering features like Generative Expand, which stretches an image's aspect ratio. Until now, however, the only place to create original artwork from scratch has been the Firefly web app (aside from a few tricks involving Generative Fill on a blank canvas).

By restricting its training data to stock photos and artwork owned by Adobe, Firefly sets itself apart from other AI art models, aiming to be safer for use in commercial applications. The latest model upgrade, Firefly Image 3, promises "higher-quality images," with an emphasis on lighting and composition as well as improved prompt comprehension.



How to start generating AI images in Photoshop

 

 Since these features are still in testing, first download the Photoshop desktop beta. Then start a new project and select "Generate Image" from the Contextual Task Bar. If it isn't there, check your Tools panel or go to Edit > Generate Image.

Next, type your prompt. There should also be buttons in the Contextual Task Bar that let you apply Style Effects to your output and switch the Content Type between Photo and Art. These can be used both before and after generation, and they should also appear in the Properties Panel.
 
 

How to use Photoshop to generate an AI image with a reference

 You aren't limited to creating images from scratch in Photoshop. You can also use an existing image to guide what the AI creates.

To use a reference while generating AI art in Photoshop, first create a base image using the steps above. Then choose Reference Image from the Properties Panel or Contextual Task Bar. Upload your reference image and run your prompt again to bring the result more in line with it. Photoshop also includes several built-in reference photos you can use instead of uploading your own.
 
 

 
 

How to tweak AI art generated in Photoshop

 The latest Photoshop beta includes a function called "Generate Similar" that lets you make minor adjustments to previously generated AI images. It essentially works like the Reference Image feature, except that instead of forcing you to download and re-upload photos, it lets you work from freshly generated ones.

To create similar AI art, start by generating an AI image from scratch using the instructions in the second section above. Next, pick your image and select Generate Similar from the three-dots icon in the Variations Panel or from the Contextual Task Bar. You can view the variants you've produced in the Properties Panel.
 
 
 

How to generate an AI background in Photoshop

 You can also replace the backgrounds of existing photos with newly generated AI ones. To do this, first choose Import Image on an empty canvas, then pick Remove Background in the Contextual Task Bar or Discover Panel.

After that, choose Generate Background from the Edit menu or Contextual Task Bar. From there, you can create an image using a process similar to generating one from scratch.
 
 

How to enhance detail in Photoshop

The final AI feature in the new Photoshop beta is the ability to Enhance Detail. This is an adjustment to Generative Fill, which you can select in either the Contextual Task Bar or Edit Menu. Unlike Generate Image, Generative Fill will generate objects only in specific parts of your canvas.

Once you’ve generated an object with Generative Fill, navigate to the Properties Panel and then Variations, where you can pick a specific version of that object and click the Enhance Detail icon to increase its sharpness and general detail.

 

New non-AI features in Photoshop


 

 

Joining this new suite of AI features is the Adjustment Brush, which can apply non-destructive, non-AI-powered color and lighting edits to specific parts of an image. For instance, turning a section of blue hair into green hair.

To use the Adjustment Brush, select it within the Brush Tool in the Tools Panel. From there, choose your adjustments and paint where you’d like them to be applied. They’ll show up in a new layer that won’t change the underlying image file.

Alongside the Adjustment Brush, the new Photoshop beta also includes an improved font browser that will allow direct access to fonts stored in the cloud without requiring the user to leave the program.

 

 

 

 
 

Apple Finally Plans to Release a Calculator App for iPad Later This Year

 After more than 14 years since the iPad's debut, Apple is finally preparing a built-in Calculator app for the device, according to a person with knowledge of the matter.







A built-in Calculator app will come to all iPads compatible with iPadOS 18, which is expected to be unveiled on June 10 at WWDC, Apple's annual developers conference.

The absence of an official Calculator app on the iPad has long been a running joke on social media. In the meantime, iPad users have relied on third-party App Store calculators such as PCalc and Calcbot.


According to a report published by AppleInsider last week, the Calculator app in macOS 15 will be redesigned with Notes app integration, a resizable window, a sidebar showing recent calculations, and other features. While we haven't independently verified those details, it's likely the new iPad app will serve as the model for the revised Mac version.


The first beta of iPadOS 18 is expected after the WWDC keynote, with a general release scheduled for September.

If only Instagram and WhatsApp were available on the iPad.

Monday, April 22, 2024

iPhone 16 Pro: 5 biggest rumored camera upgrades

 


Come September, shutterbugs will have a lot to cheer about. That's when we expect Apple to introduce the iPhone 16 Pro and 16 Pro Max, its newest high-end flagships, which are rumored to bring more significant camera improvements than their predecessors.

Nothing on our list of the best camera phones has yet surpassed the iPhone 15 Pro Max, but that could change when the iPhone 16 Pro and Pro Max arrive. Beyond the anticipated new image-processing algorithms, further hardware changes could help Apple close the gap in photo and video capture.


Beyond the hardware, we can't wait to see what new features iOS 18 has in store when it's unveiled this summer at Apple's WWDC 2024 conference. AI is expected to play a bigger role on the iPhone 16 Pro, which could tip the scales.


Larger camera sensors

 

Larger sensors are expected in the iPhone 16 Pro models, which makes sense given that the primary camera handles most of the heavy lifting. According to Digital Chat Station, the primary 48MP camera will have a 1/1.14-inch sensor, up from the 1/1.28-inch sensor in the iPhone 15 Pro and 15 Pro Max.

Larger sensors generally capture more light, which leads to cleaner images and noticeably better low-light performance. Based on the many low-light shots we took during our Samsung Galaxy S24 Ultra vs. iPhone 15 Pro Max photo shootout, we can say with confidence that this holds up.
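To get a rough sense of how much that sensor bump matters, we can compare the two formats numerically. This sketch assumes light-gathering area scales with the square of the nominal optical-format diagonal, which is a simplification ("1/x-inch" formats are nominal, not exact physical sizes):

```python
# Rough sketch: relative light-gathering area of two sensor formats.
# Assumes area scales with the square of the optical-format diagonal;
# real "1/x-inch" formats are nominal, so treat this as an estimate.
def area_ratio(fmt_new: float, fmt_old: float) -> float:
    """Relative area of a 1/fmt_new-inch sensor vs a 1/fmt_old-inch one."""
    return (fmt_old / fmt_new) ** 2

# 1/1.14" (rumored iPhone 16 Pro) vs 1/1.28" (iPhone 15 Pro)
ratio = area_ratio(1.14, 1.28)
print(f"~{(ratio - 1) * 100:.0f}% more sensor area")  # roughly 26% larger
```

Under that assumption, the rumored sensor would gather about a quarter more light per exposure, which is consistent with the low-light gains the rumor implies.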

Another major rumored upgrade for both models is a 48MP ultrawide camera with a larger 1/2.6-inch sensor. This is an interesting rumor because, beyond producing better photos overall, Apple could apply the same pixel-binning technique used in the main camera of the standard iPhone 15 to preserve optical-grade quality if you plan to crop ultrawide shots later.
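Pixel binning, the technique mentioned above, combines neighboring sensor pixels into one larger effective pixel, trading resolution (e.g. 48MP down to 12MP) for more light per output pixel. A minimal sketch of 2x2 binning on a toy grid of brightness values (the grid and averaging scheme are illustrative, not Apple's actual pipeline):

```python
# Minimal 2x2 pixel-binning sketch: each 2x2 block of sensor values
# is averaged into a single output pixel, quartering the resolution.
def bin_2x2(pixels):
    """Average each 2x2 block of a 2D grid (even dimensions assumed)."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            total = (pixels[y][x] + pixels[y][x + 1] +
                     pixels[y + 1][x] + pixels[y + 1][x + 1])
            row.append(total / 4)
        out.append(row)
    return out

grid = [[10, 20, 30, 40],
        [30, 40, 50, 60],
        [ 5, 15, 25, 35],
        [15, 25, 35, 45]]
print(bin_2x2(grid))  # [[25.0, 45.0], [15.0, 35.0]]
```

Real sensors bin same-color pixels within a color filter array and may sum rather than average, but the resolution-for-light trade-off is the same idea.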

Tetraprism telephoto lens design with iPhone 16 Pro

 


 

One of the biggest incentives to get the iPhone 15 Pro Max is its tetraprism telephoto camera design, which delivers 5x optical zoom. That's a longer reach than the iPhone 15 Pro's 3x optical zoom, but rumor has it that the iPhone 16 Pro will share the same tetraprism telephoto lens as the iPhone 16 Pro Max, effectively bringing 5x optical zoom to both new iPhone 16 Pro models.
 
 
 Although this is fantastic news for the iPhone 16 Pro, it does take away some of the allure for those considering the iPhone 16 Pro Max. With the current iPhone 15 Pro and 15 Pro Max priced $200 apart, it will be interesting to see whether Apple raises the price of the iPhone 16 Pro now that both models would share the same telephoto reach.
 

6x telephoto zoom on iPhone 16 Pro Max

 

However, the existing pricing may hold if a rumor about the iPhone 16 Pro Max getting an "ultra" telephoto lens turns out to be true. One of the earliest iPhone 16 Pro Max camera rumors claimed the larger iPhone will get a somewhat longer 6x telephoto zoom.
That would properly separate it from the iPhone 16 Pro once more, and the longer zoom range would make a terrific exclusive for the iPhone 16 Pro Max. As we've seen, the 5x telephoto camera on the iPhone 15 Pro Max regularly produces crisper, more detailed photos than the 3x telephoto camera on the iPhone 14 Pro Max.
 
 That said, the iPhone 16 Pro Max may not go beyond the 25x digital zoom the iPhone 15 Pro Max already offers. Even so, this improvement in the telephoto camera over the previous year would be significant.
 
 

Reducing lens flare

 

 Lens flares still appear in photos shot with the iPhone 15 Pro and 15 Pro Max, though they may not be a major distraction for everyone. We experienced this firsthand when photographing the solar eclipse in April 2024 with the iPhone 15 Pro, but rumors suggest Apple is working to resolve the problem.

To improve photo quality, Apple could coat the cameras of the iPhone 16 Pro and 16 Pro Max with a new anti-reflective material. This coating would reportedly be applied using a novel atomic layer deposition (ALD) technology, shielding images from lens flares.

 

Capture button


 

 Lastly, it appears the iPhone 16 Pro and 16 Pro Max (and possibly the rest of the iPhone 16 series) will gain a Capture button. Unlike the Action button, which debuted on the iPhone 15 Pro models, the Capture button is tied directly to the camera.

The Capture button is believed to be capacitive, capable of detecting different pressure levels to carry out specific camera-related tasks. For instance, a gentle press might capture a photo, while a longer, firmer press could start recording video.
 
 It's unclear whether the Capture button will serve any purpose beyond the camera. Its capacitive nature could also enable a focus function: half-pressing the button would lock focus, and pressing further would take the picture. The Action button can already be configured to perform similar tasks.
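The rumored behavior amounts to mapping press pressure and duration onto camera actions. A hypothetical sketch of that mapping follows; the thresholds, pressure scale, and action names are all invented for illustration and do not reflect any confirmed Apple API:

```python
# Hypothetical sketch of the rumored pressure-sensitive Capture button.
# Thresholds, the 0..1 pressure scale, and action names are invented
# for illustration; nothing here is a confirmed Apple behavior.
def capture_action(pressure: float, duration_s: float) -> str:
    """Map a button press to a camera action."""
    if pressure < 0.5:
        return "take_photo"    # light press: snap a picture
    if duration_s >= 0.5:
        return "start_video"   # firm, sustained press: record video
    return "lock_focus"        # firm, brief press: lock focus

print(capture_action(0.3, 0.1))  # take_photo
print(capture_action(0.8, 0.7))  # start_video
print(capture_action(0.8, 0.2))  # lock_focus
```

The interesting design question is exactly this disambiguation: with a single capacitive surface, pressure separates "focus" from "shoot," while duration separates "photo" from "video."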

 


Google Wallet for Wear OS might soon require PIN code before tap-to-pay


 

Google Wallet for Wear OS may soon require a PIN code before enabling tap-to-pay transactions, in line with the more frequent authentication now required on Android phones.

Reports of this are currently scarce; until now, Wear OS users were never asked for a PIN before making a Google Wallet payment. All they had to do was launch the watch app and tap.

 

Since we haven't been able to reproduce this across several Pixel Watch 2 transactions today, it could still be a test, a gradual rollout, or simply an app bug. Nevertheless, the shift makes some sense given that it mirrors the new phone behavior.

The move is clearly motivated by security, yet it feels rather abrupt. One advantage of a watch is that it's always with you, and Wear OS is already fairly cautious, requesting the PIN if it senses the watch has been taken off your wrist. That adds to the confusion around today's change, which in a way suggests Google Wallet doesn't trust Wear OS security.


This new behavior likely means a first tap-to-pay attempt will always fail unless you know to launch Google Wallet beforehand (assuming a PIN prompt appears), either through the app list/grid, the Quick Settings tile on the Pixel Watch, or a watch face shortcut.

In contrast, you have to double-tap the side button if you want to pay with the Apple Watch.


On phones, you have three minutes after unlocking your device to use Google Wallet. After that, tap-to-pay will fail, and you'll need to verify your identity (by opening the app to "Verify it's you," or by locking and unlocking your phone again before paying) and then tap a second time.
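The three-minute rule is just a time-window check against the last unlock. A purely illustrative sketch of that logic follows; this is an assumption about how such a gate could work, not Google's actual implementation:

```python
# Illustrative sketch of a tap-to-pay time-window check: payment is
# allowed only within ~3 minutes of the last device unlock, otherwise
# the user must re-verify. Not Google's actual implementation.
import time

UNLOCK_WINDOW_SECONDS = 3 * 60

def can_tap_to_pay(last_unlock_ts, now=None):
    """True if the device was unlocked within the allowed window."""
    if now is None:
        now = time.time()
    return (now - last_unlock_ts) <= UNLOCK_WINDOW_SECONDS

print(can_tap_to_pay(1000.0, now=1100.0))  # True  (100 s after unlock)
print(can_tap_to_pay(1000.0, now=1300.0))  # False (past the 180 s window)
```

The user-facing friction described above falls out of exactly this kind of check: any tap after the window expires fails until a fresh verification resets the timestamp.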




Google revealed earlier this week that this was a deliberate change on phones, formally announced under the title "Google Wallet enhances in-store payment experience with new authentication update": "Google Wallet contactless payments have never been safer. You may now choose to disable identity verification for transit fares and be requested to verify your identity before completing a payment using a PIN, pattern, fingerprint, or Class 3 biometric unlock."


The new support document, however, doesn't specify a form factor. It never specifically addresses smartwatches; it's clearly talking about phones.

Wear OS could have offered a reprieve for those irritated by the phone change, but not with this new behavior. Unlocking with a fingerprint is undoubtedly more convenient than entering a PIN on a tiny screen. (Speaking of Wear OS and PINs, the Pixel Watch really ought to support PINs longer than four digits.)