Search Results

Search found 50704 results on 2029 pages for 'application fork'.

Page 2/2029 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Profile Picture Thumbnails, Following Projects, and Fork Collaboration

    [Do you tweet? Follow us on Twitter @matthawley and @adacole_msft] We deployed a new version of the CodePlex website last week. Profile Picture Thumbnails We have added a way to select a thumbnail from your profile picture, which will start appearing next to usernames across the site.  Managing your thumbnail is simple. From your profile page, choose Edit your profile.  On the left side, you’ll find an intuitive widget for choosing a profile picture, uploading it, and editing your thumbnail image. If you previously uploaded a profile picture, we’ve used that to generate a starter thumbnail. We welcome your suggestions and ideas for areas where seeing user thumbnails would be useful or interesting. Following Projects Based on some feedback we’ve received recently, we have taken several steps to help you discover and follow interesting and popular projects on CodePlex: The homepage now surfaces the top Projects Users are Following from the previous 7 days. When you visit any project homepage, you can see at a glance how many people follow the project. When you visit the People tab for any project, you will see both the project contributors and the 25 most recent project followers. Fork Collaboration We now support enabling collaborators on a fork based on a large number of user requests.  From the Source Code management page for your fork, you will now see the following on the right side: To add a collaborator, type in a username and click Add. All fork collaborators will have the ability to push to the fork and send/cancel pull requests.  To remove a collaborator, hover over user, and click on the X that appears: The CodePlex team values your feedback, and is frequently monitoring Twitter, our Discussions and Issue Tracker for new features or problems. If you’ve not visited the Issue Tracker recently, please take a few moments to log an idea or vote for the features you would most like to see implemented on CodePlex.

    Read the article

  • Best way to fork SVN project with Git

    - by Jeremy Thomerson
    I have forked an SVN project using Git because I needed to add features that they didn't want. But at the same time, I wanted to be able to continue pulling features or fixes that they add to the upstream version down into my fork (where they don't conflict). So, I have my Git project with the following branches:

    master - the branch I actually build and deploy from
    feature_* - feature branches where I work or have worked on new things, which I then merge to master when complete
    vendor-svn - my local-only git-svn branch that allows me to "git svn rebase" from their svn repo
    vendor - my local branch that I merge vendor-svn into. Then I push this (vendor) branch to the public git repo (github)

    So, my flow is something like this:

        git checkout vendor-svn
        git svn rebase
        git checkout vendor
        git merge vendor-svn
        git push origin vendor

    Now, the question comes here: I need to review each commit that they made (preferably individually, since at this point I'm about twenty commits behind them) before merging them into master. I know that I could run git checkout master; git merge vendor, but this would pull in all changes and commit them, without me being able to see if they conflict with what I need. So, what's the best way to do this? Git seems like a great tool for handling forks of projects since you can pull and push from multiple repos - I'm just not experienced with it enough to know the best way of doing this. Here's the original SVN project I'm talking about: https://appkonference.svn.sourceforge.net/svnroot/appkonference My fork is at github.com/jthomerson/AsteriskAudioKonf (sorry - I couldn't make it a link since I'm a new user here)

    Read the article

  • local web application vs desktop application speed?

    - by Josh
    Which one would be faster - a local web app GUI made with something like qooxdoo, or a desktop app? How much of a speed difference would be expected? I would prefer creating a web app, which could be shared in the future, over creating a desktop GUI that is tied to particular GUI toolkits.

    Read the article

  • Problem with fork exec kill when redirecting output in perl

    - by Edu
    I created a script in Perl to run programs with a timeout. If the program being executed takes longer than the timeout, then the script kills this program and returns the message "TIMEOUT". The script worked quite well until I decided to redirect the output of the executed program. When stdout and stderr are being redirected, the program executed by the script is not being killed, because it has a pid different from the one I got from fork. It seems Perl executes a shell that executes my program in the case of redirection. I would like to have the output redirection but still be able to kill the program in the case of a timeout. Any ideas on how I could do that? A simplified version of my script is:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use POSIX ":sys_wait_h";

        my $timeout = 5;
        my $cmd = "very_long_program 1>&2 > out.txt";

        my $pid = fork();
        if ( $pid == 0 ) {
            exec($cmd) or print STDERR "Couldn't exec '$cmd': $!";
            exit(2);
        }

        my $time = 0;
        my $kid = waitpid($pid, WNOHANG);
        while ( $kid == 0 ) {
            sleep(1);
            $time++;
            $kid = waitpid($pid, WNOHANG);
            print "Waited $time sec, result $kid\n";
            if ($timeout > 0 && $time > $timeout) {
                print "TIMEOUT!\n";
                # Kill process
                kill 9, $pid;
                exit(3);
            }
        }
        if ( $kid == -1 ) {
            print "Process did not exist\n";
            exit(4);
        }
        print "Process exited with return code $?\n";
        exit($?);

    Thanks for any help.
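    As the question suspects, exec on a single string containing shell metacharacters (the 1>&2 > out.txt redirection) makes the child run /bin/sh, so the pid returned by fork belongs to the shell, not the program. The usual fix is to do the redirection yourself in the child before exec-ing the program directly (in Perl: reopen STDOUT/STDERR, then call exec with a list so no shell is involved), or to kill the whole process group instead of a single pid. A minimal sketch of the first approach in C, reusing the program name and output file from the question:

        #include <fcntl.h>
        #include <signal.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            pid_t pid = fork();
            if (pid == 0) {
                /* child: redirect stdout/stderr to out.txt, then exec directly (no shell) */
                int fd = open("out.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
                dup2(fd, STDOUT_FILENO);
                dup2(fd, STDERR_FILENO);
                close(fd);
                execlp("very_long_program", "very_long_program", (char *)NULL);
                _exit(127);                      /* reached only if exec failed */
            }
            /* parent: poll with a timeout; pid now really is the program's pid */
            for (int waited = 0; waited < 5; waited++) {
                if (waitpid(pid, NULL, WNOHANG) == pid)
                    return 0;                    /* child finished in time */
                sleep(1);
            }
            kill(pid, SIGKILL);                  /* timeout: kill the program itself */
            waitpid(pid, NULL, 0);               /* reap the killed child */
            return 3;
        }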

    Read the article

  • confusing fork system call

    - by benjamin button
    Hi, I was just checking the behaviour of the fork system call and I found it very confusing. I saw on a website that Unix will make an exact copy of the parent's address space and give it to the child; therefore, the parent and child processes have separate address spaces.

        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            pid_t pid;
            char y = 'Y';
            char *ptr;
            ptr = &y;

            pid = fork();
            if (pid == 0) {
                y = 'Z';
                printf(" *** Child process ***\n");
                printf(" Address is %p\n", ptr);
                printf(" char value is %c\n", y);
                sleep(5);
            }
            else {
                sleep(5);
                printf("\n ***parent process ***\n");
                printf(" Address is %p\n", ptr);
                printf(" char value is %c\n", y);
            }
        }

    The output of the above program is:

        *** Child process ***
        Address is 69002894
        char value is Z

        ***parent process ***
        Address is 69002894
        char value is Y

    So, from the above-mentioned statement, it seems that child and parent do have separate address spaces; that is why the char value is printed differently. But then why am I seeing the same address for the variable in both the child and parent processes? Please help me understand this!
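    For reference, what %p prints is a virtual address, and each process has its own virtual address space, so the same virtual address in parent and child can be backed by different physical pages (the kernel copies a page lazily, on the first write, via copy-on-write). A small sketch of the same experiment with deterministic ordering, using a hypothetical variable x:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            int x = 1;                 /* logically copied into the child at fork() */
            pid_t pid = fork();
            if (pid == 0) {
                x = 42;                /* first write triggers copy-on-write: the child
                                          now has its own physical page at this address */
                printf("child : &x = %p, x = %d\n", (void *)&x, x);
                exit(0);
            }
            waitpid(pid, NULL, 0);     /* wait, so the child's line prints first */
            printf("parent: &x = %p, x = %d\n", (void *)&x, x);   /* still 1 */
            return 0;
        }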

    Read the article

  • problem with fork()

    - by john
    I'm writing a shell which forks, with the parent reading the input and the child process parsing and executing it with execvp. Pseudocode of the main method:

        do {
            pid = fork();
            print pid;
            if (pid < 0) { error; exit; }
            if (pid > 0) {
                wait for child to finish;
                read input;
            } else {
                call function to parse input;
                exit;
            }
        } while condition
        return;

    What happens is that I never seem to enter the child process (the pid printed is always positive; I never enter the else). However, if I don't call the parse function and just have the else exit, I do correctly enter parent and child alternately. Full code:

        int main(int argc, char *argv[]) {
            char input[500];
            pid_t p;
            int firstrun = 1;

            do {
                p = fork();
                printf("PID: %d", p);
                if (p < 0) { printf("Error forking"); exit(-1); }
                if (p > 0) {
                    wait(NULL);
                    firstrun = 0;
                    printf("\n> ");
                    bzero(input, 500);
                    fflush(stdout);
                    read(0, input, 499);
                    input[strlen(input)-1] = '\0';
                }
                /* the simplified variant that works: else exit(0); */
                else {
                    if (parse(input) != 0 && firstrun != 1) {
                        printf("Error parsing");
                        exit(-1);
                    }
                    exit(0);
                }
            } while (strcmp(input, "exit") != 0);
            return 0;
        }
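    For comparison, here is a minimal sketch of the same read-fork-exec-wait loop that is known to alternate correctly; it hands the line to /bin/sh -c purely as a stand-in for the question's parse()/execvp step, so the parsing itself is not the thing under test:

        #include <stdio.h>
        #include <string.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            char input[500];
            for (;;) {
                printf("> ");
                fflush(stdout);
                if (fgets(input, sizeof input, stdin) == NULL)   /* EOF ends the shell */
                    break;
                input[strcspn(input, "\n")] = '\0';
                if (strcmp(input, "exit") == 0)
                    break;

                pid_t pid = fork();
                if (pid < 0) { perror("fork"); return 1; }
                if (pid == 0) {
                    /* child: /bin/sh -c stands in for parse() + execvp() */
                    execlp("/bin/sh", "sh", "-c", input, (char *)NULL);
                    perror("exec");          /* reached only if exec fails */
                    _exit(127);
                }
                waitpid(pid, NULL, 0);       /* parent waits before prompting again */
            }
            return 0;
        }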

    Read the article

  • Avoiding a fork()/SIGCHLD race condition

    - by larry
    Please consider the following fork()/SIGCHLD pseudo-code.

        // main program excerpt
        for (;;) {
            if ( is_time_to_make_babies ) {
                pid = fork();
                if (pid == -1) {
                    /* fail */
                } else if (pid == 0) {
                    /* child stuff */
                    print "child started"
                    exit
                } else {
                    /* parent stuff */
                    print "parent forked new child ", pid
                    children.add(pid);
                }
            }
        }

        // SIGCHLD handler
        sigchld_handler(signo) {
            while ( (pid = wait(status, WNOHANG)) > 0 ) {
                print "parent caught SIGCHLD from ", pid
                children.remove(pid);
            }
        }

    In the above example there's a race condition. It's possible for /* child stuff */ to finish before /* parent stuff */ starts, which can result in a child's pid being added to the list of children after it has exited, and never being removed. When the time comes for the app to close down, the parent will wait endlessly for the already-finished child to finish.

    One solution I can think of to counter this is to have two lists: started_children and finished_children. I'd add to started_children in the same place I'm adding to children now. But in the signal handler, instead of removing from children I'd add to finished_children. When the app closes down, the parent can simply wait until the difference between started_children and finished_children is zero. Another possible solution I can think of is using shared memory, e.g. share the parent's list of children and let the children .add and .remove themselves? But I don't know too much about this.

    EDIT: Another possible solution, which was the first thing that came to mind, is to simply add a sleep(1) at the start of /* child stuff */, but that smells funny to me, which is why I left it out. I'm also not even sure it's a 100% fix.

    So, how would you correct this race condition? And if there's a well-established recommended pattern for this, please let me know! Thanks.
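    One well-established pattern is to block SIGCHLD around the fork()/children.add() pair, so the handler cannot run until the pid has been recorded. A hedged C sketch of just that windowing; add_child/remove_child are stand-ins for the pseudo-code's list operations, and the printf calls in the handler are for illustration only (real handlers should stick to async-signal-safe functions):

        #include <signal.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/wait.h>
        #include <unistd.h>

        static void add_child(pid_t pid)    { printf("tracking %d\n", (int)pid); }
        static void remove_child(pid_t pid) { printf("reaped   %d\n", (int)pid); }

        static void sigchld_handler(int signo) {
            (void)signo;
            pid_t pid;
            while ((pid = waitpid(-1, NULL, WNOHANG)) > 0)
                remove_child(pid);
        }

        int main(void) {
            struct sigaction sa;
            memset(&sa, 0, sizeof sa);
            sa.sa_handler = sigchld_handler;
            sa.sa_flags = SA_RESTART;
            sigemptyset(&sa.sa_mask);
            sigaction(SIGCHLD, &sa, NULL);

            sigset_t block, old;
            sigemptyset(&block);
            sigaddset(&block, SIGCHLD);

            for (int i = 0; i < 3; i++) {
                /* close the race: SIGCHLD stays pending until the pid is recorded */
                sigprocmask(SIG_BLOCK, &block, &old);
                pid_t pid = fork();
                if (pid == 0) {
                    sigprocmask(SIG_SETMASK, &old, NULL);   /* child: restore mask */
                    _exit(0);                               /* "child stuff" */
                }
                add_child(pid);                             /* parent records the pid... */
                sigprocmask(SIG_SETMASK, &old, NULL);       /* ...then lets the handler run */
                sleep(1);
            }
            return 0;
        }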

    Read the article

  • Free Windows Application Blocker/Monitor

    - by Click Ok
    I want a free application monitor that, when it detects certain keywords in a window title (for example), closes the application (or prevents it from opening/installing). A nice extra would be if the program also logged application activity and the internet sites accessed by any browser. Thank you very much! PS: I'm using Windows 7 Ultimate.

    Read the article

  • How to comply with this guideline for submitting an application to the Software Center?

    - by George Edison
    I was reading through the Ubuntu Developer Programme Agreement for submitting applications to the Software Center and stumbled across the following clause: 3.1 You must first test Apps you submit to confirm they are compatible with all currently supported versions of Ubuntu (as listed on Canonical's website at the date of submission by you) and your Apps must comply with the Publishing Policy. Does this mean I must install both the 32- and 64-bit versions of Ubuntu 8.04, 10.04, 10.10, 11.04, and 11.10? If so, that's 10 installations of Ubuntu - is that really feasible (even with virtual machines)? Alternatively, does anyone have suggestions for testing the application without actually installing each version? Some sort of chroot tool, perhaps?

    Read the article

  • Creating HTML5 Offline Web Applications with ASP.NET

    - by Stephen Walther
    The goal of this blog entry is to describe how you can create HTML5 Offline Web Applications when building ASP.NET web applications. I describe the method that I used to create an offline Web application when building the JavaScript Reference application. You can read about the HTML5 Offline Web Application standard by visiting the following links: Offline Web Applications Firefox Offline Web Applications Safari Offline Web Applications Currently, the HTML5 Offline Web Applications feature works with all modern browsers with one important exception. You can use Offline Web Applications with Firefox, Chrome, and Safari (including iPhone Safari). Unfortunately, however, Internet Explorer does not support Offline Web Applications (not even IE 9). Why Build an HTML5 Offline Web Application? The official reason to build an Offline Web Application is so that you do not need to be connected to the Internet to use it. For example, you can use the JavaScript Reference Application when flying in an airplane, riding a subway, or hiding in a cave in Borneo. The JavaScript Reference Application works great on my iPhone even when I am completely disconnected from any network. The following screenshot shows the JavaScript Reference Application running on my iPhone when airplane mode is enabled (notice the little orange airplane):   Admittedly, it is becoming increasingly difficult to find locations where you can’t get Internet access. A second, and possibly better, reason to create Offline Web Applications is speed. An Offline Web Application must be downloaded only once. After it gets downloaded, all of the files required by your Web application (HTML, CSS, JavaScript, Image) are stored persistently on your computer. Think of Offline Web Applications as providing you with a super browser cache. Normally, when you cache files in a browser, the files are cached on a file-by-file basis. For each HTML, CSS, image, or JavaScript file, you specify how long the file should remain in the cache by setting cache headers. Unlike the normal browser caching mechanism, the HTML5 Offline Web Application cache is used to specify a caching policy for an entire set of files. You use a manifest file to list the files that you want to cache and these files are cached until the manifest is changed. Another advantage of using the HTML5 offline cache is that the HTML5 standard supports several JavaScript events and methods related to the offline cache. For example, you can be notified in your JavaScript code whenever the offline application has been updated. You can use JavaScript methods, such as the ApplicationCache.update() method, to update the cache programmatically. Creating the Manifest File The HTML5 Offline Cache uses a manifest file to determine the files that get cached. 
Here’s what the manifest file looks like for the JavaScript Reference application: CACHE MANIFEST # v30 Default.aspx # Standard Script Libraries Scripts/jquery-1.4.4.min.js Scripts/jquery-ui-1.8.7.custom.min.js Scripts/jquery.tmpl.min.js Scripts/json2.js # App Scripts App_Scripts/combine.js App_Scripts/combine.debug.js # Content (CSS & images) Content/default.css Content/logo.png Content/ui-lightness/jquery-ui-1.8.7.custom.css Content/ui-lightness/images/ui-bg_glass_65_ffffff_1x400.png Content/ui-lightness/images/ui-bg_glass_100_f6f6f6_1x400.png Content/ui-lightness/images/ui-bg_highlight-soft_100_eeeeee_1x100.png Content/ui-lightness/images/ui-icons_222222_256x240.png Content/ui-lightness/images/ui-bg_glass_100_fdf5ce_1x400.png Content/ui-lightness/images/ui-bg_diagonals-thick_20_666666_40x40.png Content/ui-lightness/images/ui-bg_gloss-wave_35_f6a828_500x100.png Content/ui-lightness/images/ui-icons_ffffff_256x240.png Content/ui-lightness/images/ui-icons_ef8c08_256x240.png Content/browsers/c8.png Content/browsers/es3.png Content/browsers/es5.png Content/browsers/ff3_6.png Content/browsers/ie8.png Content/browsers/ie9.png Content/browsers/sf5.png NETWORK: Services/EntryService.svc http://superexpert.com/resources/JavaScriptReference/ A Cache Manifest file always starts with the line of text Cache Manifest. In the manifest above, all of the CSS, image, and JavaScript files required by the JavaScript Reference application are listed. For example, the Default.aspx ASP.NET page, jQuery library, JQuery UI library, and several images are listed. Notice that you can add comments to a manifest by starting a line with the hash character (#). I use comments in the manifest above to group JavaScript and image files. Finally, notice that there is a NETWORK: section of the manifest. You list any file that you do not want to cache (any file that requires network access) in this section. In the manifest above, the NETWORK: section includes the URL for a WCF Service named EntryService.svc. This service is called to get the JavaScript entries displayed by the JavaScript Reference. There are two important things that you need to be aware of when using a manifest file. First, all relative URLs listed in a manifest are resolved relative to the manifest file. The URLs listed in the manifest above are all resolved relative to the root of the application because the manifest file is located in the application root. Second, whenever you make a change to the manifest file, browsers will download all of the files contained in the manifest (all of them). For example, if you add a new file to the manifest then any browser that supports the Offline Cache standard will detect the change in the manifest and download all of the files listed in the manifest automatically. If you make changes to files in the manifest (for example, modify a JavaScript file) then you need to make a change in the manifest file in order for the new version of the file to be downloaded. The standard way of updating a manifest file is to include a comment with a version number. The manifest above includes a # v30 comment. If you make a change to a file then you need to modify the comment to be # v31 in order for the new file to be downloaded. When Are Updated Files Downloaded? When you make changes to a manifest, the changes are not reflected the very next time you open the offline application in your web browser. Your web browser will download the updated files in the background. 
This can be very confusing when you are working with JavaScript files. If you make a change to a JavaScript file, and you have cached the application offline, then the changes to the JavaScript file won’t appear when you reload the application. The HTML5 standard includes new JavaScript events and methods that you can use to track changes and make changes to the Application Cache. You can use the ApplicationCache.update() method to initiate an update to the application cache and you can use the ApplicationCache.swapCache() method to switch to the latest version of a cached application. My heartfelt recommendation is that you do not enable your application for offline storage until after you finish writing your application code. Otherwise, debugging the application can become a very confusing experience. Offline Web Applications versus Local Storage Be careful to not confuse the HTML5 Offline Web Application feature and HTML5 Local Storage (aka DOM storage) feature. The JavaScript Reference Application uses both features. HTML5 Local Storage enables you to store key/value pairs persistently. Think of Local Storage as a super cookie. I describe how the JavaScript Reference Application uses Local Storage to store the database of JavaScript entries in a separate blog entry. Offline Web Applications enable you to store static files persistently. Think of Offline Web Applications as a super cache. Creating a Manifest File in an ASP.NET Application A manifest file must be served with the MIME type text/cache-manifest. In order to serve the JavaScript Reference manifest with the proper MIME type, I added two files to the JavaScript Reference Application project: Manifest.txt – This text file contains the actual manifest file. Manifest.ashx – This generic handler sends the Manifest.txt file with the MIME type text/cache-manifest. Here’s the code for the generic handler: using System.Web; namespace JavaScriptReference { public class Manifest : IHttpHandler { public void ProcessRequest(HttpContext context) { context.Response.ContentType = "text/cache-manifest"; context.Response.WriteFile(context.Server.MapPath("Manifest.txt")); } public bool IsReusable { get { return false; } } } } The Default.aspx file contains a reference to the manifest. The opening HTML tag in the Default.aspx file looks like this: <html manifest="Manifest.ashx"> Notice that the HTML tag contains a manifest attribute that points to the Manifest.ashx generic handler. Internet Explorer simply ignores this attribute. Every other modern browser will download the manifest when the Default.aspx page is requested. Seeing the Offline Web Application in Action The experience of using an HTML5 Web Application is different with different browsers. When you first open the JavaScript Reference application with Firefox, you get the following warning: Notice that you are provided with the choice of whether you want to use the application offline or not. Browsers other than Firefox, such as Chrome and Safari, do not provide you with this choice. Chrome and Safari will create an offline cache automatically. If you click the Allow button then Firefox will download all of the files listed in the manifest. You can view the files contained in the Firefox offline application cache by typing about:cache in the Firefox address bar: You can view the actual items being cached by clicking the List Cache Entries link: The Offline Web Application experience is different in the case of Google Chrome. 
You can view the entries in the offline cache by opening the Developer Tools (hit Shift+CTRL+I), selecting the Storage tab, and selecting Application Cache: Notice that you view the status of the Application Cache. In the screen shot above, the status is UNCACHED which means that the files listed in the manifest have not been downloaded and cached yet. The different possible values for the status are included in the HTML5 Offline Web Application standard: UNCACHED – The Application Cache has not been initialized. IDLE – The Application Cache is not currently being updated. CHECKING – The Application Cache is being fetched and checked for updates. DOWNLOADING – The files in the Application Cache are being updated. UPDATEREADY – There is a new version of the Application. OBSOLETE – The contents of the Application Cache are obsolete. Summary In this blog entry, I provided a description of how you can use the HTML5 Offline Web Application feature in the context of an ASP.NET application. I described how this feature is used with the JavaScript Reference Application to store the entire application on a user’s computer. By taking advantage of this new feature of the HTML5 standard, you can improve the performance of your ASP.NET web applications by requiring users of your web application to download your application once and only once. Furthermore, you can enable users to take advantage of your applications anywhere -- regardless of whether or not they are connected to the Internet.

    Read the article

  • State management using the Application class in ASP.Net applications

    - by nikolaosk
    I have explained some of the state mechanisms that we have at our disposal for preserving state in ASP.Net applications in various posts on this blog. You can have a look at this post, this post, this post and this one. I have not yet presented an example of using the Application class/object for preserving state within our application. Application state is available globally in an application. The way we access Application State is through the HttpApplication object's Application property. Let...(read more)

    Read the article

  • Windows Azure Learning Plan - Application Fabric

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the Application Fabric for Windows Azure. It serves three main purposes - Access Control, Caching, and as a Service Bus.   Overview and Training Overview and general  information about the Azure Application Fabric, - what it is, how it works, and where you can learn more. General Introduction and Overview http://msdn.microsoft.com/en-us/library/ee922714.aspx Access Control Service Overview http://msdn.microsoft.com/en-us/magazine/gg490345.aspx Microsoft Documentation http://msdn.microsoft.com/en-gb/windowsazure/netservices.aspx Learning and Examples Sources for online and other Azure Appllications Fabric training Application Fabric SDK http://www.microsoft.com/downloads/en/details.aspx?FamilyID=39856a03-1490-4283-908f-c8bf0bfad8a5&displaylang=en Application Fabric Caching Service Primer http://blogs.msdn.com/b/appfabriccat/archive/2010/11/29/azure-appfabric-caching-service-soup-to-nuts-primer.aspx?wa=wsignin1.0 Hands-On Lab: Building Windows Azure Applications with the Caching Service http://www.wadewegner.com/2010/11/hands-on-lab-building-windows-azure-applications-with-the-caching-service/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+WadeWegner+%28Wade+Wegner+-+Technical%29 Architecture  Azure Application Fabric Internals and Architectures for Scale Out and other use-cases. Azure Application Fabric Architecture Guide http://blogs.msdn.com/b/yasserabdelkader/archive/2010/09/12/release-of-windows-server-appfabric-architecture-guide.aspx Windows Azure AppFabric Service Bus - A Deep Dive (Video) http://www.msteched.com/2010/Europe/ASI410 Access Control Service (ACS) High Level Architecture http://blogs.msdn.com/b/alikl/archive/2010/09/28/azure-appfabric-access-control-service-acs-v-2-0-high-level-architecture-web-application-scenario.aspx Applications  and Programming Programming Patterns and Architectures for SQL Azure systems. Various Examples from PDC 2010 on using Azure Application as a Service Bus http://tinyurl.com/2dcnt8o Creating a Distributed Cache using the Application Fabric http://blog.structuretoobig.com/post/2010/08/31/Creating-a-Poor-Mane28099s-Distributed-Cache-in-Azure.aspx  Azure Application Fabric Java SDK http://jdotnetservices.com/

    Read the article

  • Advantages and disadvantages of building a single page web application

    - by ryanzec
    I'm nearing the end of a prototyping/proof of concept phase for a side project I'm working on, and trying to decide on some larger scale application design decisions. The app is a project management system tailored more towards the agile development process. One of the decisions I need to make is whether or not to go with a traditional multi-page application or a single page application. Currently my prototype is a traditional multi-page setup; however, I have been looking at backbone.js to clean up and apply some structure to my Javascript (jQuery) code. It seems like while backbone.js can be used in multi-page applications, it shines more with single page applications. I am trying to come up with a list of advantages and disadvantages of using a single page application design approach. So far I have:

    Advantages:

    • All data has to be available via some sort of API - this is a big advantage for my use case as I want to have an API to my application anyway. Right now about 60-70% of my calls to get/update data are done through a REST API. Doing a single page application will allow me to better test my REST API since the application itself will use it. It also means that as the application grows, the API itself will grow since that is what the application uses; no need to maintain the API as an add-on to the application.
    • More responsive application - since all data loaded after the initial page is kept to a minimum and transmitted in a compact format (like JSON), data requests should generally be faster, and the server will do slightly less processing.

    Disadvantages:

    • Duplication of code - for example, model code. I am going to have to create models both on the server side (PHP in this case) and on the client side in Javascript.
    • Business logic in Javascript - I can't give any concrete examples of why this would be bad, but it just doesn't feel right to me having business logic in Javascript that anyone can read.
    • Javascript memory leaks - since the page never reloads, Javascript memory leaks can happen, and I would not even know where to begin to debug them.

    There are also other things that are kind of double-edged swords. For example, with single page applications, the data processed for each request can be a lot less since the application will be asking for the minimum data it needs for the particular request; however, it also means that there could be a lot more small requests to the server. I'm not sure if that is a good or bad thing. What are some of the advantages and disadvantages of single page web applications that I should keep in mind when deciding which way I should go for my project?

    Read the article

  • Why "Fork me on github"?

    - by NoBugs
    I understand how Github works, but one thing I've been confused about is, why almost every OSS project lately has a "Fork me on Github" link on their homepage. For example, http://jqtjs.com/, http://www.daviddurman.com/flexi-color-picker/, and others. Why is this so common? Is it that they want/need code validation, checking for security/performance improvements that they may not know how to do? Is it meant to show that this is a collaborative project - you're welcome to add improvements? Do they work for Github, or want to promote their service? Oddly enough, I don't think I've seen a "Fork project on Bitbucket" logo recently. My first reaction to that logo was that the project probably needs to be modified (forked) in order to integrate it with anything useful - or that they are encouraging fragmented codebase, encouraging everyone to make their own fork of the project. But I don't think that is the intent.

    Read the article

  • C - fork() and sharing memory

    - by Ben
    I need my parent and child process to both be able to read and write the same variable (of type int), so it is "global" between the two processes. I'm assuming this would use some sort of cross-process communication, with one variable in one process being updated. A quick Google search turns up IPC and various techniques, but I don't know which is the most suitable for my situation. So what technique is best, and could you provide a link to a noob's tutorial for it? Thanks.
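    For a single int that both processes read and write, the simplest mechanism is usually a shared anonymous mapping created before fork(); parent and child then see the same memory. A minimal sketch (no synchronization shown - a real program would add a semaphore or atomics around the shared value):

        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            /* shared anonymous mapping: survives fork() and is visible to both processes */
            int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            if (shared == MAP_FAILED) { perror("mmap"); return 1; }
            *shared = 0;

            pid_t pid = fork();
            if (pid == 0) {
                *shared = 42;              /* child writes to the shared page */
                _exit(0);
            }
            waitpid(pid, NULL, 0);         /* wait, so the write is ordered before the read */
            printf("parent sees %d\n", *shared);   /* prints 42 */
            munmap(shared, sizeof *shared);
            return 0;
        }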

    Read the article

  • Fork to shell script and terminate original process with Haskell

    - by Neth
    I am currently writing a Haskell program that does some initialization work and then calls ncmpcpp. What I am trying to do is start ncmpcpp and terminate the Haskell program, so that only ncmpcpp is left (optionally, the program can keep running in the background, as long as it's unintrusive). However, even though I am able to start ncmpcpp, I cannot interact with it. I see its output, but input appears to be impossible. What I am currently doing is:

        import System.Process (createProcess, proc)
        ...
        spawnCurses :: [String] -> IO ()
        spawnCurses params = do
            _ <- createProcess (proc "ncmpcpp" params)
            return ()

    What am I doing wrong/what should I do differently?
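    The behaviour being asked for - only ncmpcpp left, owning the terminal - is what the Unix exec family gives: it replaces the current process image instead of spawning a child, so the PID and the controlling terminal stay the same and nothing of the original program remains. In Haskell the unix package exposes this as System.Posix.Process.executeFile; the underlying idea, sketched here in C with ncmpcpp hard-coded:

        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            /* ... initialization work would happen here ... */

            /* replace this process with ncmpcpp: same PID, same controlling
               terminal, so keyboard input goes straight to ncmpcpp */
            char *args[] = { "ncmpcpp", NULL };
            execvp("ncmpcpp", args);

            perror("execvp");        /* reached only if ncmpcpp could not be started */
            return 1;
        }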

    Read the article

  • How stable are Single Page Application (SPA) build with Microsoft .Net for enterprise application [on hold]

    - by Husrat Mehmood
    Imagine a situation where your data loads into your application via a REST API and you are building a responsive (AJAX) application for an enterprise. What potential problems might I run into with a single page application (SPA) built as a Microsoft ASP.NET web application using the MVC template? Are there advantages to just designing a multi-page application using ASP.NET MVC 5? Remember, I am using an SPA for an enterprise application where there are role-based views for the users.

    Read the article

  • What to choose API based server or Socket based server for data driven application

    - by Imdad
    I am working on a project which has a desktop application for Mac/Cocoa, a native application for iPhone, and another native application for iPad. All the applications do almost the same thing. The applications are data driven applications. Every communication with the server is made via a RESTful API developed in PHP. When a user logs in, a lot of data is fetched from the server, and to remain in sync with the server, polling is done. As there is a lot of data to poll, it makes the applications slower and unreliable. A possible solution that comes to my mind is to use a socket-based server. My question is: will it reasonably improve the performance? And which technology (of sockets) would be good as a server-side solution for a data driven application? I have heard a lot about Node.js. Please give your suggestions.

    Read the article

  • What's the best way to clean up after a fork bomb?

    - by raldi
    $ ls
    bash: no more processes

    Uh oh. Looks like someone made a fork bomb. Where I used to work, this pretty much meant that the shared server would need to be power-cycled, since even the sysadmins with root often couldn't get the problem cleaned up. Often, they couldn't even get a prompt. I've heard a few tricks (notably, to send STOP signals rather than KILL signals, since the latter would allow the remaining threads to immediately replace the killed ones), but I've never seen a comprehensive guide entitled So, You Have Yourself a Fork Bomb? Let's make one.

    Read the article

  • Application that will install application

    - by user23950
    I'm thinking of something like PC Decrapifier, but which can do the exact opposite of what PC Decrapifier can do: install a number of applications in one click. Is there an application like that? Not a web app please; I already know of one of those but I forgot the name, so if you know it, please comment.

    Read the article

  • How to protect Ubuntu from fork bomb

    - by dblang
    I heard someone talking about a fork bomb, I did some research and found some dreadful information about some strange looking characters people can have you type at the command line and as a result do bad things on the computer. I certainly would not issue commands I do not understand but one never knows what can happen. I heard that some OS allows the administrator to place some limit on user processes to mitigate the effects of fork bombs, is this protection in Ubuntu by default or would a person with sudo privilege have to set this? If so, how?
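    The mechanism behind this kind of protection is the per-user process limit: set for a session with ulimit -u, made persistent per user or group via the nproc entry in /etc/security/limits.conf, and exposed to programs as the RLIMIT_NPROC resource limit. A small illustrative C sketch of that last form (the numeric limit is an arbitrary example):

        #include <stdio.h>
        #include <sys/resource.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void) {
            /* cap this user's process count (the value is arbitrary for the demo) */
            struct rlimit lim = { .rlim_cur = 100, .rlim_max = 100 };
            if (setrlimit(RLIMIT_NPROC, &lim) != 0)
                perror("setrlimit");

            /* once the limit is reached, fork() fails with EAGAIN instead of the
               bomb taking the whole machine down -- the same effect an nproc line
               in /etc/security/limits.conf enforces for every process of the user */
            pid_t pid = fork();
            if (pid < 0)
                perror("fork");
            else if (pid == 0)
                _exit(0);
            else
                waitpid(pid, NULL, 0);
            return 0;
        }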

    Read the article

