Use NativeScript Plugin nativescript-background-http To Send And Store Camera Pictures On PHP Server

I searched and searched, asked on Slack, asked in the NativeScript Facebook group, and tried everything I could think of to figure this out. I finally pieced it together and it’s working. So, do you want to let your app users take a picture with their camera, or use one from their library, and have that photo stored on your web server? We let users take before/progress/after pics from using our device, and we want to store those photos so we can see results ourselves and ask users for permission to use them in advertisements. So here’s how you do it.

nativescript-background-http

You need to make sure to import your camera UI module and also this plugin. It’s extremely awesome and works great. You can get it by running

tns plugin add nativescript-background-http

EDIT: For some reason, when I put this on an actual device it wouldn’t save the photos. On an actual phone it has a problem with using knownFolders.currentApp() plus a custom folder, so I changed that line from

fs.knownFolders.currentApp().path + "/saved_images";

to

fs.knownFolders.documents().path;

and it worked just fine.

After setting that up, use the code below, or whatever parts of it you’d like, to build the NativeScript function and push the upload to your server. A couple of notes, though. I use a folder called /saved_images in my /app project directory to store the files before sending. I’m not sure the intermediate save is necessary, but I wanted to resize to a maximum of 800px in width or height while retaining proportions. It is extremely important that this directory exists if you use this filepath; otherwise your script will fail and you won’t know why, because error logging here doesn’t surface anything.
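If you want a dedicated folder that also works on an actual device, one option is to let the file-system module create it for you under documents(); getFolder() gets or creates the folder in one call (the folder name here is just my convention):

var fs = require("file-system");
// getFolder() gets-or-creates the folder, so the path is guaranteed to exist.
var savedImages = fs.knownFolders.documents().getFolder("saved_images");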

Also, I store a global configuration variable for my API’s URL, so I use url: config.apiUrl in my request options.

https://gist.github.com/ChrisFlannagan/e9e32428935076419f24aec6a30a7381
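If you’d rather see the shape of it inline, here’s a stripped-down sketch of that function. It assumes the core camera module, the config.apiUrl global mentioned above, and a session name I made up ("image-upload"); treat it as a starting point rather than a drop-in:

var fs = require("file-system");
var cameraModule = require("camera");
var bghttp = require("nativescript-background-http");

// One background-http session, reused for every upload.
var session = bghttp.session("image-upload");

exports.takeAndUpload = function() {
    // Resize to a maximum of 800px while keeping proportions.
    cameraModule.takePicture({ width: 800, height: 800, keepAspectRatio: true })
        .then(function(picture) {
            // documents() works on actual devices; see the EDIT above.
            var filename = "photo-" + Date.now() + ".jpg";
            var filepath = fs.path.join(fs.knownFolders.documents().path, filename);
            picture.saveToFile(filepath, "jpeg");

            var request = {
                url: config.apiUrl,
                method: "POST",
                headers: {
                    "Content-Type": "application/octet-stream",
                    "File-Name": filename
                },
                description: "Uploading " + filename
            };

            var task = session.uploadFile(filepath, request);
            task.on("complete", function() { console.log("upload complete"); });
            task.on("error", function() { console.log("upload failed"); });
        });
};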

Saving The Image In PHP

So config.apiUrl points directly to my PHP file (e.g. server.com/apifile.php). In this file you need to call file_get_contents('php://input') to access the streamed data coming in. This is how you get the file data from a request submitted with content type “application/octet-stream”. With this data you can simply save it however you’d like using file_put_contents. There are other security concerns you will need to address that I’m not going over here, as they differ across applications of this process. But here’s a quick, simple snippet for saving the image to an /uploads/ directory:

$all_headers = apache_request_headers();
// php://input holds the raw octet-stream body; in production, sanitize the
// File-Name header (e.g. with basename()) before using it as a path.
$newimg = file_put_contents('uploads/' . $all_headers['File-Name'], file_get_contents('php://input'));

Now your image has been saved!

Additional User Data Needed

If you are storing this image and attaching it to a user, or to any other kind of data, you can pass those values as header fields just as you did File-Name. In your JS, just under the line:

"File-Name": filename

Add a comma and put the next field you might need such as:

"File-Name": filename,
"Email-Add": email

Enjoy!

Touch “Hold” Function: Button Held Down In NativeScript

Tonight I needed to run a function constantly while a button is being held down in my app.  I found the gestures functionality and it had what I needed but I had to do some work to get it to perform what I was after.

My task was this: when someone puts their finger down on a ‘Record’ button, I need to store the sound level every 10 milliseconds. So here’s what I did.

Using Gestures Touch to Define “Hold”

When the page is loaded I assign the button element to a variable, then I add the on-touch function. It’s important to declare that button’s variable at the beginning of your JS so it’s available in all functions. It’s also important to require your view module and the gestures module (plus the file-system module, which the documents variable below relies on). Here’s a snippet that covers all of that:

var fs = require("file-system");
var viewModule = require("ui/core/view");
var audio = require("nativescript-audio");
var timer = require("timer");
var gestures = require("ui/gestures");
var documents = fs.knownFolders.currentApp();
var recorder = new audio.TNSRecorder();
var isRec = false;
var timeRec;
var currentRecording = new Array();
var page;
var soundlevel;

exports.loaded = function(args) {
    page = args.object;
    soundlevel = viewModule.getViewById(page, "soundlevel");

    soundlevel.on(gestures.GestureTypes.touch, function (args) {
        if(args.action == "down" || args.action == "up") {
            recordVolume( args.action );
        }
    });
}

Now that I’m calling a function, let’s tell that function what to do. We have our “soundlevel” view, which is just a button with id=”soundlevel”, and we’ve attached an on-touch handler to it. The touch gesture passes args that include “action” as a property. That property can be “down” or “up”, amongst other things, and those two are exactly what we need. So now let’s build a function to handle the hold.

function recordVolume(action) {
    var recorderOptions = {
        filename: documents.path + "/recording.caf",
        infoCallback: function() {
            console.log();
        },

        errorCallback: function() {
            console.log();
        }
    };
    if( action == "up" ) {
        recorder.stop();
        isRec = false;
        timer.clearInterval(timeRec);
    } else {
        recorder.start( recorderOptions );
        currentRecording = new Array();
        isRec = true;
        timeRec = timer.setInterval( function() {
            // getMeters() returns dB from -160 (near silence) up to 0 (max power),
            // so this maps the reading onto a 0-100 scale.
            var barWidth = Math.round((160 + Number(recorder.getMeters(0))) / 160 * 100);
            currentRecording.push(barWidth);
        }, 10);
    }
}

Note: barWidth is an integer I end up assigning as a percentage of the screen width.  This way I can give a visualization of sound level.

Here we check whether the action is up or down and run our desired functionality based on that. If it’s “up”, that means the finger has been lifted from the screen, so we stop everything.

If the action is “down”, that means the user has just pressed the button. So we use the timer library and set an interval: every 10 milliseconds we calculate the sound level and push it to our array of sound levels. Now we can analyze that and do what we want to do!
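For example, this hypothetical helper boils the captured samples down to an average, a peak, and a duration (each entry is the 0-100 value pushed by the interval above):

function summarizeRecording(samples) {
    // samples is the currentRecording array: one 0-100 level every 10ms.
    var sum = 0;
    var peak = 0;
    for (var i = 0; i < samples.length; i++) {
        sum += samples[i];
        if (samples[i] > peak) {
            peak = samples[i];
        }
    }
    return {
        average: samples.length ? Math.round(sum / samples.length) : 0,
        peak: peak,
        durationMs: samples.length * 10
    };
}

Call it with summarizeRecording(currentRecording) after the “up” action fires.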


Monitor Sound Wave Decibels While Recording on iOS

You’re going to laugh at this.  My brother-in-law, David, told me an app idea he wanted me to build YEARS ago.  I had only written one crappy Android app, but when you write an app people start throwing you their ideas like crazy.

His idea, it was funny, but it was beyond my ability so I never really got going on it.  Now that NativeScript is here and I’m spending more time in the app world I decided to give it a go.  His idea? RankMyStank…

Farts

He wanted an app where you fart into the phone and it ranks your fart based on an array of factors: length, depth, variety, pitch, and so on. I’m still working out that algorithm…

What I love about this idea is that it made me really dig into NativeScript and figure out how to make this happen. So I found the plugin nativescript-audio, which boasted the ability to record and play back sounds easily in NS.

At first I couldn’t get it to work, so I put the project to the side.  After I released my first app into iTunes recently I had learned so much more about the framework and decided to give it another go.

I got the plugin working pretty quickly. As I toyed with it, I began to realize there was no way to track the sound wave or get any details about the structure of the sounds being recorded. You could just record, then play.

Contributing

I’ve always wanted to contribute to an open source project but never felt like I was talented enough.  As I dug into figuring out how to accomplish this task I realized if I could figure it out, I could probably add it to the plugin.  This was motivation.  So, I set forth on my venture and here’s what I figured out.

Native Functionality

The cool thing about NativeScript is that you have access to all the native functions of iOS and Android.  Usually you don’t tap into them because there is such a great library of plugins already.  And there’s no way I could build this plugin from scratch at this point so altering the nativescript-audio plugin was my best option.

Inside the nativescript-audio plugin is a directory called src/ios/ and in here you find the functionality that communicates with the iOS native functions for audio.  The file recorder.js was my best bet so I popped it open.  There I saw:

TNSRecorder.prototype.start = function (options) {

As well as

TNSRecorder.prototype.stop = function () {

Amongst others, these told me that I had found the spot where the functions I’d been calling from my JavaScript code were being defined.

Monitoring Recording

So I spent quite some time learning about the class I saw being used here: AVAudioRecorder. This is the iOS class used to record sound. I found out exactly how to use it to get the dB level of the sound being recorded in real time. The function is called averagePowerForChannel(), though there is also peakPowerForChannel(). From what I understand, averagePowerForChannel() is closer to what the human ear hears, so I went with that.

I also discovered that in order to use these functions you must turn metering on. It’s processor intensive, so it is not on by default. Both functions return a reading in decibels, from -160 (near silence) up to 0 (full scale, maximum power). In the end, this is all the code I added to the plugin to give us two new functions: one to detect if recording is happening and one to get the current dB of the sound being recorded.

TNSRecorder.prototype.isRecording = function () {
    var _this = this;
    // AVAudioRecorder exposes a boolean "recording" property.
    return _this._recorder.recording;
};
TNSRecorder.prototype.getMeters = function (channel) {
    var _this = this;
    // Metering is processor intensive, so it's off by default; enable it once.
    if (!_this._recorder.meteringEnabled) {
        _this._recorder.meteringEnabled = true;
    }
    // Refresh the meter values, then return the average power in dB (-160 to 0).
    _this._recorder.updateMeters();
    return _this._recorder.averagePowerForChannel(channel);
};

I wound up not even using the isRecording function but I figured it could be useful for others so I left it in after testing it.

Rewriting in TypeScript

TypeScript is how the original plugin was built, with a compiler creating the .js files. Knowing this, I had to write these functions in TS before submitting a pull request to the original plugin. Fortunately, that wasn’t too difficult.

public isRecording() {
  var _this = this;
  return _this._recorder.recording;
}

public getMeters(channel: number) {
  var _this = this;
  if(!_this._recorder.meteringEnabled) {
    _this._recorder.meteringEnabled = true;
  }
  _this._recorder.updateMeters();
  return _this._recorder.averagePowerForChannel(channel);
}

Using In NativeScript

So here’s what I have now, just demoing the functions and showing they work. I update a Label view’s width to show the current dB coming in. It’s fairly straightforward and a decent example of its use.

exports.onTap = function(args) {
    var recorderOptions = {
        filename: documents.path + "/recording.caf",
        infoCallback: function() {
            console.log();
        },
        errorCallback: function() {
            console.log();
        }
    };
    if( isRec ) {
        recorder.stop();
        isRec = false;
        timer.clearInterval(timeRec);
    } else {
        recorder.start( recorderOptions );
        isRec = true;
        timeRec = timer.setInterval( function() {
            var barWidth = Math.round((160+Number(recorder.getMeters(0)))/160*100);
            page.css = ".soundlevel { color:#FFF; padding: 10px; background-color: Brown; width:" + barWidth + "%; }";
        }, 10);
    }
}

Don’t Touch Things In /platforms/

I learned this the hard way, just like everything else in here related to NativeScript. It’s a young framework and there is not a ton of documentation or much of a community yet. But I love it, and the Slack community is pretty active.

I finally finished an app and needed to get it into iTunes for my boss. I started following build instructions using Xcode, which involved following along with NativeScript’s docs and also getting linked over to Apple’s dev docs a lot. Somewhere along the way I felt I needed to make some changes to my app in Xcode so it could get submitted for review, since it kept hitting errors when trying to upload.

Icons

Icons are handled in a way that’s not overly complex but is super easy to screw up. When you open the project that NativeScript generates, you will see a lot of stuff going on. One of those things is the “Use Asset Catalog” option for your icons.

If you have any issues with icons when trying to upload to iTunes you will probably start toying with the Asset Catalog and your info.plist.  One word of advice: DON’T!

NativeScript handles this for you just fine. Make sure you have all the required icons in your /app/App_Resources/iOS directory (including the 120px ones, i.e. 40@3x and so on; these are new and screwed me up bad).

tns prepare ios

The point is, when you want to build your app, just run tns prepare ios and then open the project in Xcode. You can let it handle some of the automated tasks it suggests when it pops up warnings, but don’t screw with the .plist file or the asset catalog stuff, or you will spend an entire day, like me, trying to figure out what’s wrong.

When you run the prepare CLI command, NativeScript builds out a fully functional Xcode project for you. Other than dealing with setting up your Apple developer account and connecting it to Xcode, there’s not much else for you to do. When you are ready to build and submit to the App Store, all you have to do is make sure the selected device is “Generic iOS Device”, hit Product > Archive, and then, when the Organizer window opens, Validate and Upload to iTunes.
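In other words, the command-line half of the cycle is just two steps (the .xcodeproj name here is a placeholder; yours matches your project name):

tns prepare ios
open platforms/ios/MyApp.xcodeproj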