Posts From June, 2017

Xamarin Workbooks 

Overview

Xamarin Workbooks provide a blend of documentation and code that is perfect for experimentation, learning, and creating guides and teaching aids.

Create a rich C# workbook for Android, iOS, Mac, WPF, or Console, and get instant live results as you learn these APIs.

Give it a try and let us know what you think, or if you run into any bugs. We have a forum too!

 

Installation and Requirements

Information on getting Workbooks installed on your Mac or Windows computer.

 

Interactive Workbooks

An introduction to how to use Xamarin Workbooks.

 

Samples

There are a wide variety of sample workbooks available, including those highlighted here.

 

Integration SDK

It is possible to extend Xamarin Workbooks with custom data representations and renderers. Integrations are written with C# and TypeScript/JavaScript.

Posted by Andrew Tierney Wednesday, June 28, 2017 9:35:00 AM Categories: .NET Xamarin

Adding Face Tracking and Live Recognition to your Android App 

Camera apps today can do so much more than just take perfect photos. Whether it’s adding filters to your image or letting you adjust focus and exposures manually, apps can essentially transform you into a professional photographer. While numerous apps in the app store let you do many things with the camera, there are others that do clever manipulation of your images in real time, such as adding a custom mask to your face, manipulating your face to look older or younger, and more. Social media apps today have this intelligence baked in, so you don’t have to open another app to make your selfies look extraordinary before you share them with friends and family.

Whether you’re building a consumer app that provides fun filters or an enterprise app that recognizes receipts, adding intelligent features like these gives you an edge over others. With services like the Microsoft Cognitive Services APIs, you can quickly analyze and process your photos by adding just a few lines of code.

In this post, you’ll learn how to display a camera preview within an Android app, add face tracking using Google’s Mobile Vision APIs, and recognize the person in front of the camera using the Microsoft Cognitive Services Face API.

Face Detection

Adding Face Tracking to Your App

Google’s Mobile Vision provides a set of APIs for face detection, text recognition, and barcode scanning. For this example, I’ll be using the Face APIs to detect human faces in the live camera stream within the app.

To get started, add the Xamarin.GooglePlayServices.Vision NuGet package.

To access the camera on Android, you need to request the user’s permission and add the camera feature to your Android manifest:

 

if (ActivityCompat.CheckSelfPermission(this, Manifest.Permission.Camera) == Permission.Granted)
{
    CreateCameraSource();
    //...
}
else
{
    Log.Warn(TAG, "Camera permission is not granted. Requesting permission");

    var permissions = new string[] { Manifest.Permission.Camera };

    if (!ActivityCompat.ShouldShowRequestPermissionRationale(this,
            Manifest.Permission.Camera))
    {
        ActivityCompat.RequestPermissions(this, permissions, RC_HANDLE_CAMERA_PERM);
        return;
    }

    Snackbar.Make(mGraphicOverlay, Resource.String.permission_camera_rationale,
            Snackbar.LengthIndefinite)
            .SetAction(Resource.String.ok, (o) => { ActivityCompat.RequestPermissions(this, permissions, RC_HANDLE_CAMERA_PERM); })
            .Show();
}

 

Create Camera Source

The FaceDetector.Builder instantiates the FaceDetector with the specified properties. In this example, we use the default face detector settings and associate a processor to it.

 

private void CreateCameraSource()
{
    var context = Application.Context;
    FaceDetector detector = new FaceDetector.Builder(context)
            .SetClassificationType(ClassificationType.All)
            .Build();

    detector.SetProcessor(
            new MultiProcessor.Builder(this)
                    .Build());

    if (!detector.IsOperational)
    {
        // Note: The first time that an app using face API is installed on a device, GMS will
        // download a native library to the device in order to do detection.  Usually this
        // completes before the app is run for the first time.  But if that download has not yet
        // completed, then the above call will not detect any faces.
        //
        // IsOperational can be used to check if the required native library is currently
        // available.  The detector will automatically become operational once the library
        // download completes on the device.
        Log.Warn(TAG, "Face detector dependencies are not yet available.");
    }

    mCameraSource = new CameraSource.Builder(context, detector)
            .SetRequestedPreviewSize(640, 480)
            .SetFacing(CameraFacing.Front)
            .SetRequestedFps(30.0f)
            .Build();
}

 

In the above code, a MultiProcessor is used to receive detection results, and CameraSource.Builder instantiates a camera source with the specified preview size, camera facing, and requested frame rate.

Show a Live Stream from the Camera

Now that we have access to the camera and the CameraSource is built, we can start the preview on a custom SurfaceView, in this case CameraSourcePreview. A GraphicOverlay instance is also passed in so that a face border can be drawn at the time of detection.

 

private void StartCameraSource()
{
    // check that the device has play services available.
    int code = GoogleApiAvailability.Instance.IsGooglePlayServicesAvailable(
            this.ApplicationContext);
    if (code != ConnectionResult.Success)
    {
        var dlg =
                GoogleApiAvailability.Instance.GetErrorDialog(this, code, RC_HANDLE_GMS);
        dlg.Show();
    }

    if (mCameraSource != null)
    {
        try
        {
            mPreview.Start(mCameraSource, mGraphicOverlay);
        }
        catch (System.Exception e)
        {
            Log.Error(TAG, "Unable to start camera source.", e);
            mCameraSource.Release();
            mCameraSource = null;
        }
    }
}

 

The ideal place to call StartCameraSource() is in the activity’s OnResume(), while OnPause() should stop the preview. This ensures that camera resources are used only while the user is in the app.
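As a sketch, that lifecycle wiring might look like the following (assuming mPreview is the CameraSourcePreview field from this sample and that it exposes a Stop() method, as in Google’s Face Tracker sample):

```csharp
protected override void OnResume()
{
    base.OnResume();
    // Restart the preview whenever the activity comes to the foreground.
    StartCameraSource();
}

protected override void OnPause()
{
    base.OnPause();
    // Release the camera while the app is in the background.
    mPreview.Stop();
}
```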

Detect Faces

Every image frame received from the camera source may contain multiple faces, and each face corresponds to a distinct face identity, represented by a Tracker instance created by the MultiProcessor.

Implement IFactory to capture each Face

The MultiProcessor requires an implementation of IFactory so it can call back when a human face is detected by the camera. In this example, IFactory is implemented by the MainActivity, which provides the Create() method:

 

public Tracker Create(Java.Lang.Object item)
{
    return new GraphicFaceTracker(mGraphicOverlay, mCameraSource);
}

 

In the above code, a new instance of GraphicFaceTracker is created for every detected face, and each one builds a face box graphic over the video stream using the GraphicOverlay view passed to the constructor.

Below is the implementation of GraphicFaceTracker:

 

class GraphicFaceTracker : Tracker
{
    private GraphicOverlay mOverlay;
    private FaceGraphic mFaceGraphic;
    private CameraSource mCameraSource;

    public GraphicFaceTracker(GraphicOverlay overlay, CameraSource cameraSource)
    {
        mOverlay = overlay;
        mFaceGraphic = new FaceGraphic(overlay);
        mCameraSource = cameraSource;
    }

    public override void OnNewItem(int id, Java.Lang.Object item)
    {
        mFaceGraphic.SetId(id);
    }

    public override void OnUpdate(Detector.Detections detections, Java.Lang.Object item)
    {
        var face = item as Face;
        mOverlay.Add(mFaceGraphic);
        mFaceGraphic.UpdateFace(face);
    }

    public override void OnMissing(Detector.Detections detections)
    {
        mOverlay.Remove(mFaceGraphic);
    }

    public override void OnDone()
    {
        mOverlay.Remove(mFaceGraphic);
    }
}

 

The FaceGraphic instance is created when a Face is detected the first time, updated as the face changes, and hidden when the face goes out of the frame.

That’s it: we’ve successfully added face tracking to a custom camera surface within the app! Next, we’ll recognize the person within the frame.

Recognize a Face from the Live Stream

To recognize the face within the live video frame, we capture the image whenever a new face is detected and send it to Microsoft Cognitive Services APIs to identify the person. Face Recognition requires artificial intelligence and highly efficient machine learning algorithms, which are provided to you as a service and free to get started. If you’re new to Cognitive Services, I highly recommend reading the blog post Adding Facial Recognition to Your Mobile Apps.

Capture the Detected Face

To capture the newly detected face, first modify GraphicFaceTracker to implement CameraSource.IPictureCallback:

 

class GraphicFaceTracker : Tracker, CameraSource.IPictureCallback
{
    //...
}

Then modify OnNewItem() to capture the image frame containing the face:

 

public override void OnNewItem(int id, Java.Lang.Object item)
{
    mFaceGraphic.SetId(id);
    if (mCameraSource != null && !isProcessing)
        mCameraSource.TakePicture(null, this);
}

 

Identify the Person in the Captured Frame

Finally, using the helper classes ImageAnalyzer and LiveCamHelper (which abstract the Cognitive Services API calls), we identify the person in the video stream.

 

public void OnPictureTaken(byte[] data)
{
    Task.Run(async () =>
    {
        try
        {
            isProcessing = true;

            Console.WriteLine("face detected: ");

            var imageAnalyzer = new ImageAnalyzer(data);
            await LiveCamHelper.ProcessCameraCapture(imageAnalyzer);
        }
        finally
        {
            isProcessing = false;
        }
    });
}


 

Wrapping Up

Face recognition is widely used in many scenarios, including security, natural user interface, robotics, and more. By using a combination of these services and platform APIs, you can build unmatchable mobile apps that are intelligent and provide five-star user experiences. In this blog post, we used Google Mobile Vision APIs to detect human faces from the Video Live Stream and Microsoft Cognitive Services to recognize the person within the frame. For building the same sample on iOS, read How To Display Camera in Your iOS apps and Adding Facial Recognition to Your Mobile Apps.

The sample built and explained in this post was inspired by the Google Developer documentation on Face Tracker.

Download the samples for both iOS and Android from my GitHub repo.

Note: For this demo, I’ve written the code to recognize a single face in the frame. However, feel free to modify it and make it better.

 

Written on June 12, 2017 by
Nish Anil

Posted by Andrew Tierney Wednesday, June 28, 2017 8:44:00 AM Categories: .NET Xamarin

Interactive 3D with UrhoSharp and Workbooks 

Humans live in a three-dimensional world, which we navigate from the time we’re infants. We might, therefore, assume that 3D graphics programming would be an intuitively natural fit to our lifetime of experience; after all, we simply need to translate familiar concepts (left and right, up and down, forward and back) into an X-Y-Z coordinate system.

For most people, however, 3D graphics is quite daunting. It’s hard to get a mental feel for 3D objects and how they move through space, and the mathematics involved can be truly hairy. It’s very common for beginning 3D graphics developers to programmatically define a camera, light source, and a 3D object, and then see nothing at all on the screen.

Imagine if you could then fiddle with the code a bit, perhaps change that positive Z coordinate to a negative, for instance, and immediately see the new result.

This is what the combination of UrhoSharp and Xamarin Workbooks now offers. 3D graphics programming has become more accessible to developers, as well as more enticing, by becoming more interactive.

UrhoSharp is Xamarin’s C# wrapper for the open-source Urho3D cross-platform game engine. This is a powerful, industrial-strength 3D graphics API whose major deficiency is that it has so many features it can seem overwhelming to a 3D newcomer (and to developers who already have some 3D coding experience!).

Xamarin Workbooks are documents that combine text and code. Workbooks allow you to edit and execute code in tiny pieces and get immediate feedback. To get started, download the Workbooks application from the Xamarin Workbooks page, and then begin exploring the almost 100 Workbooks that already exist for a variety of platforms.

Getting Started

If you know some UrhoSharp, it might not seem like the optimal platform for Workbooks. Normally, you create a UrhoSharp application by deriving a class from Application, overriding several methods, and then calling a method named Run. This Run method is blocking; it doesn’t return until the application has completed. Within a Workbook, this application architecture is not conducive to an interactive experience.

For that reason, a new UrhoSharp class has been created specifically for Workbooks. This class is called SimpleApplication. You can simply call the static method SimpleApplication.Show with an optional ApplicationOptions argument within the Workbook, and you’re ready to go.

There are currently eight UrhoSharp Workbooks, with more in the works. Four of these eight are demonstrations of UrhoSharp features, while four others take a more tutorial approach suitable for aspiring 3D graphics programmers. You can run these Workbooks on a PC or Mac directly from the Workbooks page, or you can download the whole Workbooks GitHub repository. The UrhoSharp Workbooks are in the graphics/urhosharp directory.

One of the crucial aspects of 3D graphics programming is defining solid objects in 3D space. UrhoSharp provides several ways to do this. The most professional approach is to use specialized modeling software. UrhoSharp supports numerous model formats. The Animated Model Workbook is the shortest of the Urho Workbooks because it does little more than load in a model of a 3D mutant and animate it:

It’s also possible to create 3D shapes algorithmically by defining or deriving all the 3D vertices right in the Workbook. This approach is demonstrated in two Workbooks: Working with Urho Custom Geometries and Building Polyhedra with UrhoSharp.

The simplest approach, however, can be found in the Urho.Shapes namespace, which contains seven classes that let you make common geometric objects: Box, Cone, Cylinder, Plane, Pyramid, Sphere, and Torus. This is certainly the fastest way to get up and running with an UrhoSharp Workbook.

Creating The Workbook

To begin creating your own UrhoSharp Workbooks, you’ll want to download the Workbooks GitHub repository. In the directories for the existing UrhoSharp Workbooks, you’ll see three library files: one for the PC (mono-urho.dll), another for the Mac (libmono-urho.dylib), and the third that’s common to both platforms (Urho.dll). You’ll need these files in the same directory as your new Workbook file.

Note: Since we’re copying the library files specifically created for Workbooks, there’s no requirement to add the UrhoSharp library from NuGet.

Using the Workbooks application, create a new Console Workbook. In a code cell, add a reference for the UrhoSharp library and include two using directives:

 

#r "Urho.dll"
using Urho;
using Urho.Shapes;

 

Now create an instance of SimpleApplication:

 

SimpleApplication app = SimpleApplication.Show();

 

When that method executes, a window will pop up on your screen to host the UrhoSharp graphics. SimpleApplication creates a default camera, a point light source, and a root node. A 3D scene in UrhoSharp is a tree of nodes. This hierarchical node architecture turns out to be quite convenient when you want to reference a whole subset of 3D components with a single object.

Creating Objects

The following code creates a new node and associates it with the Box shape:

 

Node boxNode = app.RootNode.CreateChild("boxNode");
Box box = boxNode.CreateComponent<Box>();

 

When that code executes, you’ll see a box on the UrhoSharp window:

These simple shapes become much more versatile when you realize that you can apply transforms to them. You can move them around the scene, of course, but you can also apply scaling and rotation that’s different in the three dimensions:

 

boxNode.Scale = new Vector3(3, 0.25f, 0.25f);
boxNode.Rotation = Quaternion.FromAxisAngle(Vector3.UnitZ, 45);

 

Now the box has become a thin bar that’s no longer constrained to the axes of the 3D coordinate system:

And, of course, you can set a different color or use a bitmap for texturing. These shapes become a fast and easy approach to creating 3D graphics for visualization purposes, as demonstrated by the Charts Workbook, which uses the Box class to create a little bar chart:
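A rough sketch of that bar-chart idea, building on the SimpleApplication setup above, might look like this (the Color property on Shape is an assumption here; the exact shape-coloring API may vary across UrhoSharp versions):

```csharp
// Hypothetical data values; each one becomes the height of a box "bar".
float[] data = { 1f, 3f, 2f, 5f };

for (int i = 0; i < data.Length; i++)
{
    Node barNode = app.RootNode.CreateChild("bar" + i);
    // Space the bars along X and lift each so its base sits at y = 0.
    barNode.Position = new Vector3(i * 1.5f, data[i] / 2, 0);
    // Scale the unit box so its height reflects the data value.
    barNode.Scale = new Vector3(1, data[i], 1);
    Box bar = barNode.CreateComponent<Box>();
    bar.Color = Color.Blue;
}
```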

You can also combine these basic shapes to create more complex objects. The Creating Compound Shapes Workbook exploits this technique to the max: it starts with a torus, two cylinders that are scaled to a skinny size, and two circles that are squashed to shapes that look like propeller wings:

Now the node hierarchy implemented by UrhoSharp comes in very handy, for nodes can be cloned and can have transforms applied that also impact all child nodes. This technique allows one arm to be cloned into four arms:

The Workbook goes on to describe the animations to make the wings rotate, and then lets you fly the quadcopter around the screen.

The latest Urho Workbook is called Building Polyhedra with UrhoSharp. This one might have a special appeal to anybody who spent at least part of their childhood making 3D figures out of folded cardboard.

The workbook begins with constructing a plain old dodecahedron:

The dodecahedron has 12 faces (as its name suggests), each of which is a pentagon. As a convex regular polyhedron, it’s one of five possible Platonic solids.

But then the Workbook builds upon this figure to make another figure that is certainly related but looks quite different. Each of the pentagon faces in the dodecahedron becomes the center pentagon of a five-pointed star:

This is a process known as stellating the polyhedron, and this particular result is called the small stellated dodecahedron. It’s still considered a regular polyhedron because all of the faces are the same five-pointed star, but it’s definitely not convex any more.

The Workbook then takes another step by replacing the five-pointed stars with pentagons whose vertices are the same as the points of the stars:

This is known as the great dodecahedron, and yes, another step can replace those pentagons with larger five-pointed stars to create the great stellated dodecahedron… but to see that, you’ll have to run the Workbook!

(It’s definitely more fun that way.)

Summary

Workbooks is an incredibly valuable learning tool, and the Workbooks demonstrate the power and versatility of UrhoSharp and the ability to create professional 3D objects and animations. Workbooks allows you to quickly and easily try new ideas, as well as create samples and tutorials for others.

Don’t forget to check out our guides on both Workbooks and UrhoSharp, as well as the Xamarin Workbooks page containing all our sample Workbooks.

Written on June 26, 2017 by
Charles Petzold

Posted by Andrew Tierney Wednesday, June 28, 2017 8:41:00 AM Categories: .NET Xamarin

UrhoSharp - 3D/2D Engine 

Cross Platform 3D/2D Game and Visualization Engine


UrhoSharp is a cross-platform high-level 3D and 2D engine that can be used to create animated 3D and 2D scenes for your applications using geometries, materials, lights and cameras.

UrhoSharp is distributed as a NuGet package that can be installed in either Visual Studio or Xamarin Studio on Mac or Windows, and can be used to target any of the following platforms: Android, macOS, iOS, tvOS, and Windows.

An Introduction to UrhoSharp

This article provides a high-level overview of UrhoSharp and its capabilities for 3D visualization and for use in simple 3D games.

Using UrhoSharp

In this document we describe the core concepts of UrhoSharp that you would use to build a game or create a 3D visualization for your application.

Urho and Your Platform

These guides describe the setup instructions for Urho on each target platform and describe ways to integrate Urho with your existing Android and iOS applications.

Programming UrhoSharp With F#

This guide walks through the creation of a simple "Hello, World!" UrhoSharp solution using F# and Xamarin Studio.

API Documentation

You can browse the API documentation for UrhoSharp on our web site.

Samples

We have created samples on GitHub illustrating how to use UrhoSharp.

  • FeatureSamples shows more than 40 individual samples that showcase specific features of Urho.

  • SamplyGame is a sample implementation of the Shooty Skies game.

  • FormsSample showcases how to use UrhoSharp in Xamarin.Forms applications.

All the samples run on Android, iOS, Mac and Windows. There are also a number of UrhoSharp Workbooks including Planet Earth and Exploring Coordinates.

Copyright

This documentation contains original content from Xamarin Inc, but draws extensively from the open source documentation for the Urho3D project and contains screenshots from the Cocos2D project.

License

The UrhoSharp license is available at http://download.xamarin.com/content/licenses/URHO.LICENSE

Posted by Andrew Tierney Wednesday, June 28, 2017 8:38:00 AM Categories: .NET Xamarin

Deep Dive into SkiaSharp with Xamarin.Forms 

Xamarin.Forms is a versatile cross-platform user-interface toolkit that allows a single program to run on iOS, Android, the Universal Windows Platform, and (most recently) macOS.

However, one feature missing from Xamarin.Forms is a graphics system. Sure, Xamarin.Forms can render bitmaps, and you can size and rotate a BoxView for some simple blocky images, but for those of us who love graphics programming, Xamarin.Forms has a big polygon-shaped hole.

But don’t fret! A very compelling solution for Xamarin.Forms graphics is SkiaSharp, Xamarin’s C# library that incorporates Google’s powerful Skia Graphics Engine.

Old-school graphics with SkiaSharp.

SkiaSharp is a cross-platform, high-performance 2D API for .NET developers perfect for mobile development. The API is straightforward to work with and can be used to do simple things, like drawing shapes or text, all the way to complex real-time drawing applications. The Xamarin team has even created a WYSIWYG graphics designer that puts out SkiaSharp graphics code.

As a member of Xamarin’s documentation team, I’ve been engaged in writing what I hope to be an extensive series of bite-sized tutorials on using SkiaSharp in Xamarin.Forms. These articles represent a deep dive into SkiaSharp, but one that I hope can also serve as an introduction to 2D graphics for newcomers. SkiaSharp shares many concepts with other 2D graphics drawing systems, so learning SkiaSharp is a terrific way to get a general education in 2D vector graphics.

Most recently, I’ve finished several articles on SkiaSharp Transforms. SkiaSharp supports all the basic graphics transforms found in other graphics systems, including translation, scaling, rotation, and skewing, which tilts graphics objects such as this shadow:

Example of drawing shadows in SkiaSharp.

That image uses an SKPaint object to define the characteristics of the text output. For the shadow (which is drawn first), the code sets a different color and a combination of translation, scaling, and skew to enlarge the text and tilt it to one side:

 

using (SKPaint textPaint = new SKPaint())
{
    textPaint.Style = SKPaintStyle.Fill;
    textPaint.TextSize = screenWidth / 6;
    textPaint.IsAntialias = true;

    // Common to shadow and text
    string text = "shadow";
    float xText = 20;
    float yText = screenHeight / 2;

    // Shadow
    textPaint.Color = SKColors.Gray;
    canvas.Save();
    canvas.Translate(xText, yText);
    canvas.Skew((float)Math.Tan(-Math.PI / 4), 0);
    canvas.Scale(1, 3);
    canvas.Translate(-xText, -yText);
    canvas.DrawText(text, xText, yText, textPaint);
    canvas.Restore();

    // Text
    textPaint.Color = SKColors.Blue;
    canvas.DrawText(text, xText, yText, textPaint);
}

 

SkiaSharp also supports standard matrix transforms, including non-affine transforms, which can achieve perspective and 3D rotation effects, such as the image at the top of this blog.

On mobile platforms and other modern devices, graphics is often intimately related to touch. Xamarin.Forms supports some rudimentary touch events, but not the essential tool of tracking individual fingers. More sophisticated touch tracking for Xamarin.Forms is now available with a Xamarin.Forms effect described in Invoking Events from Effects.

Combining SkiaSharp with touch tracking can result in some fun applications. I’ve recently created a Xamarin.Forms version of SpinPaint, based on a program I first wrote seven years ago for the Microsoft tabletop computer (originally called Surface and later renamed PixelSense).

SpinPaint simulates a rotating disk. Touch it or move your fingers on it, and it draws not only a line corresponding to your finger but three other lines that are mirror images, creating pretty patterns. Here’s SpinPaint running on iOS, Android, and the Universal Windows Platform:

Painting graphics with SkiaSharp

I think what impresses me most about SkiaSharp is its versatility. For example, other 2D graphics systems aren’t in total agreement about the best way to draw an arc, which is a curve on the circumference of an ellipse. Different graphics systems define the arc in very dissimilar ways.

SkiaSharp doesn’t force you into thinking of arcs in a particular way; rather, it supports three different ways to define an arc in a graphics path, so you can use the one that’s best for your particular application:

The angle arc requires that you specify a bounding rectangle for an ellipse, along with start angles and sweep angles:

 

path.AddArc(rect, startAngle, sweepAngle);

 

The resultant arc is shown here in red:

Angle arc drawn with SkiaSharp

This is identical to the AddArc and ArcTo methods of the Android Path object, and similar to the AddArc method of the iOS CGPath (although the iOS version is restricted to arcs on the circumference of a circle). This type of arc is great for pie charts.

The tangent arc is similar to the PostScript arct function and the iOS AddArcToPoint method. A radius of a circle is specified that is then fit snugly between two connected lines:

 

path.ArcTo(pointCorner, pointDestination, radius);

 

SkiaSharp draws a line and the arc, again shown in red:

Drawing a tangent arc with SkiaSharp.

The tangent arc is ideal for making rounded corners.

Finally, the elliptical arc allows you to specify two radii of an ellipse, as well as a rotation angle. The resultant tilted ellipse can fit between two points in two different ways, resulting in four different arcs between the two points, shown here in four different colors:

Drawing an elliptical arc with SkiaSharp.

You specify which of these four possible arcs you want with two further parameters:

 

ArcTo (radii, rotation, largeArcFlag, sweepDirectionFlag, point)

 

This type of arc specification is consistent with the elliptical arc supported by Scalable Vector Graphics (SVG) as well as the ArcSegment supported by XAML-based Windows graphics systems, including the Universal Windows Platform.

If you think arc-drawing is versatile, wait until you see SkiaSharp’s shaders and path effects! The shaders let you specify various gradients for filling in areas, including a linear gradient, radial gradient, conical gradient, sweep gradient, and random patterns called Perlin noise. The path effect can render a dotted or dashed line with any pattern of dots or dashes you want, but also render a line composed of little patterns:
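As a small sketch of the shader API, the following fills a rectangle with a left-to-right linear gradient (assuming a canvas obtained from a PaintSurface event handler, as in the earlier examples; the coordinates and colors are arbitrary):

```csharp
using (var paint = new SKPaint())
{
    // A red-to-blue gradient running horizontally across 256 pixels,
    // clamped at the edges.
    paint.Shader = SKShader.CreateLinearGradient(
        new SKPoint(0, 0),
        new SKPoint(256, 0),
        new SKColor[] { SKColors.Red, SKColors.Blue },
        null,
        SKShaderTileMode.Clamp);

    canvas.DrawRect(new SKRect(0, 0, 256, 256), paint);
}
```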

SkiaSharp drawing a patterned texture.

Wrapping Up

There are still more SkiaSharp articles to come, so check the Using SkiaSharp in Xamarin.Forms page often to see what’s new.

Meanwhile, you’ll also want to register for Xamarin University Presents, a series of five must-see webinars in July, including one I’ll be doing on SkiaSharp Graphics for Xamarin.Forms.

Written on May 23, 2017 by
Charles Petzold

Posted by Andrew Tierney Wednesday, June 28, 2017 8:32:00 AM Categories: .NET Xamarin

The week in .NET – .NET Conf, Material Design in XAML Toolkit 

.NET Conf, September 19-21. Call for Content is Open!

.NET Conf

.NET Conf is a free, three-day virtual developer event co-organized by the .NET community and Microsoft. Over the course of the three days, developers have a wide selection of live sessions featuring speakers from the community and the .NET product teams. These speakers are experts in their fields, and it’s a chance to learn, ask questions live, and get inspired for your next software project.

Call for content is now open! Learn more about how you can present to the .NET community.

Package of the week: Material Design In XAML Toolkit

The Material Design In XAML Toolkit theme and control library implements Google’s Material Design language in XAML for all major WPF Framework controls. It also adds new controls specific to Material Design, such as multi-action buttons and cards.

Material Design in XAML Toolkit

Meetup of the week: Decomposing a monolith into Microservices with the Open Web Interface for .NET in New York, NY

How do you build a microservice? What technologies does the .NET realm offer for us? And what if you don’t want to deploy each service independently? In this talk, Dennis Doomen will show you some of the pros and cons of microservices and how you can leverage OWIN and .NET to move your monolith into a bright new future.

Please join the New York ALT.NET Software Development Group on Wednesday, June 28, 2017
at 6:30 PM
.

.NET

ASP.NET

C#

F#

There is more content available this week in F# Weekly. If you want to see more F# awesomeness, please check it out!

Xamarin

Azure

UWP

Data

Game development

And this is it for this week!

Contribute to the week in .NET

As always, this weekly post couldn’t exist without community contributions, and I’d like to thank all those who sent links and tips. The F# section is provided by Phillip Carter, the gaming section by Stacey Haffner, the Xamarin section by Dan Rigby, and the Azure and UWP section by Michael Crump.

You can participate too. Did you write a great blog post, or just read one? Do you want everyone to know about an amazing new contribution or a useful library? Did you make or play a great game built on .NET?
We’d love to hear from you, and feature your contributions on future posts. Please add your posts, it takes only a second.

We pick the articles based on the following criteria: the posts must be about .NET, they must have been published this week, and they must be original contents. Publication in Week in .NET is not an endorsement from Microsoft or the authors of this post.

This week’s post (and future posts) also contains news I first read on The ASP.NET Community Standup, on Weekly Xamarin, on F# weekly, and on The Morning Brew.

 by Bertrand Le Roy 

 

Posted by Andrew Tierney Wednesday, June 28, 2017 7:54:00 AM Categories: .NET

The best of ARKit: Falcon 9 landing, Minecraft, Van Gogh bedroom tour, and more [Videos] 

- By Chance Miller

Apple officially unveiled its new ARKit platform earlier this month at WWDC, and since then developers have been playing with the feature and showing off just how powerful it is. We’ve highlighted some implementations of ARKit thus far, but head below for a full roundup of some of the best ARKit uses yet…

Apple’s ARKit framework aims to help developers build out augmented reality experiences and mixed reality experiences on the iPhone and iPad. Tim Cook has spoken very fondly of AR in the past and as more and more examples of ARKit emerge, it’s starting to become clear as to why he’s such a fan.

I regard it as a big idea, like the smartphone. The smartphone is for everyone, we don’t have to think the iPhone is about a certain demographic, or country or vertical market: it’s for everyone. I think AR is that big, it’s huge. I get excited because of the things that could be done that could improve a lot of lives. And be entertaining.

We’ve highlighted a few examples of ARKit over the last month, but it seems that neat projects are emerging almost every day. For instance, Tomas Garcia today shared an ARKit demo video that depicts the Falcon 9 landing in a swimming pool. Watch it below.

Falcon 9 landing at the ASDS in a swimming pool using ARkit and Unity.

Furthermore, another interesting demonstration from just today depicts using ARKit to draw in 3D. As you can see in the video below, this is a really cool use of the augmented reality technology, and while it’s not exactly clear how this feature could be useful in real-life situations, it’s interesting nonetheless.

Watch drawing in 3D with ARKit below:

A more advanced video from Mark Dawson demonstrates the process of using ARKit with an interesting “virtual copy of Van Gogh’s bedroom,” then offering a tour of the room. The detail shown in this example is absolutely incredible…there are paintings on the wall, detailed furniture, and amazing tracking.

Dawson has shared a few other examples of his work with ARKit on Medium, while also offering sample code. Check it out here and watch his Van Gogh tour below.

Another recent video from Econsor Mobile shows a whole-home tour and suggests the idea of commissioning construction projects directly on the construction site using augmented reality. The developers also offer some insight on how ARKit compares to Google’s Tango initiative:

I personally think that the tracking is better on Tango for indoor use (much, much faster and more stable). But we had some issues when we tried Tango outside, there the tracking was quite bad. I love the fact of the massive amount of compatible devices with ARKit (iPhone 6s and above). For the next AR projects we will probably use ARKit.

A variety of other neat examples of ARKit have already hit YouTube, so check them out below:

Moon landing with ARKit

Minecraft

FK23 airplane

Naval battle

Parking a car with ARKit

ARKit extreme tracking

ARKit + Unity + Overwatch Widowmaker

Rainforest garage

Physically based rendering

BB-8 test

Chance Miller

@ChanceHMiller

Chance covers Apple for 9to5Mac and contributes to 9to5Google, Electrek, and 9to5Toys. He runs the weekly EduTech series.

Tips, questions, typos to chance@9to5mac.com

Posted by Andrew Tierney Tuesday, June 27, 2017 10:22:00 AM Categories: Apple