Search Results

Search found 46813 results on 1873 pages for 'system shutdown'.

Page 941/1873 | < Previous Page | 937 938 939 940 941 942 943 944 945 946 947 948  | Next Page >

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

    On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that surprised people most is remote debug support for Windows Azure Cloud Services (a.k.a. WACS).

    Live Debugging is a Nightmare for Cloud Applications

    When we develop against a public cloud, debugging might be the most difficult task, especially after the application has been deployed. To minimize the debugging effort, Microsoft provided a local emulator for cloud services and storage when the Windows Azure platform was first announced. Using the local emulator, developers can run their applications on a local machine with almost the same behavior as on Windows Azure, and debug them easily and quickly. But once we deploy an application to Azure, we have to rely on logs and the diagnostic monitor to debug it, which is very inefficient. Visual Studio 2012 introduced a feature named "anonymous remote debug" which allows any workstation, under any user, to attach to a remote process. This is less secure compared with authenticated remote debugging, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.

    How to Use the Remote Debugger

    First, create a new Windows Azure Cloud Project in Visual Studio, select ASP.NET Web Role, and create an ASP.NET WebForm application. Then right-click on the cloud project and select "Publish". In the publish dialog, make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop because I will log into the virtual machine later in this post; it is NOT necessary for remote debugging. Then select the "advanced settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. If we check this option, the remote debugger will be enabled for all roles and all instances; currently there is no way to specify which role(s) and which instance(s) to enable. Finally, click the "Publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger. Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio and expand the "Cloud Services" node, find the cloud service, role, and instance we just published and want to debug, right-click on the instance, and select "Attach Debugger". After a while (depending on how fast our Internet connection to the Windows Azure data center is), Visual Studio switches to debug mode. Let's add a breakpoint in the default web page's form load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint; the call stack and watch features are all available to use. Now let's hit F5 to continue, and back in the browser we find the page rendered successfully.

    What's Under the Hood

    The remote debugger is a WACS plugin. When we check "Enable Remote Debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them, as below.
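    (The original post shows these settings in a screenshot that is not reproduced here. As a rough sketch of what plugin-appended settings look like in a CSCFG file, note that the setting names below are placeholders of my own, not the exact names the remote debugger plugin writes:)

        <?xml version="1.0" encoding="utf-8"?>
        <ServiceConfiguration serviceName="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
          <Role name="WebRole1">
            <ConfigurationSettings>
              <!-- Placeholder names: the real plugin-generated setting names differ -->
              <Setting name="Microsoft.WindowsAzure.Plugins.RemoteDebugger.Enabled" value="true" />
              <Setting name="Microsoft.WindowsAzure.Plugins.RemoteDebugger.CertificateThumbprint" value="(thumbprint)" />
            </ConfigurationSettings>
          </Role>
        </ServiceConfiguration>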
    At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal, we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop, there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger; as below, you can see two new ports opened on my application. Finally, on our WACS virtual machine, the Windows Azure system copies and starts the remote debug component matching the version of Visual Studio we are using. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.

    Summary

    In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2: the remote debugger. It allows us to attach to our application on a Windows Azure virtual machine from our local machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk, and since it's only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publish a beta version to the Azure staging slot), rather than in production.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Unity Is The Swiss Army Knife of Game Console Mods

    - by Jason Fitzpatrick
    This expansive console modification blends over a dozen game systems into one unified console with a shared power source and controller. There are console mods, and then there are builds like this. This impressive work in progress combines the hardware boards of multiple game systems into a single unified system that shares a single power source, video output, and controller. The attention to detail, outright gaming obsession, and geekiness are definitely creeping to the top of the charts with this one. Hit up the link below to check out a detailed post about the build and see additional videos and photos. Bacteria’s Project Unity [via Hack A Day]

    Read the article

  • "Build your own website" - Developing a CMS with Vague Requirements on a Tight Deadline

    - by walnutmon
    I'm a Java developer in charge of making a product which allows clients to "build their own site". I've spent a lot of time looking into Liferay, as I don't have any experience in building CMSs, and want to either use it or get ideas of how to build a decent system. The timeline is short, the requirements are vague, yada yada. Is Liferay a good technology to work with when showing the client (who may have very little computer expertise) a user interface to build a site? The thing is, I want the power and flexibility to avoid the learning curve of building a CMS-like product, but I don't want to waste time learning a new technology only to find it's overkill, or that it can't do the simple - but uncommon and unimplemented - things we are asked to add as features. Ideally I'd like to provide multiple web interfaces to the core API for building the sites - one that is very powerful, and another that is watered down and easy to use.

    Read the article

  • The Feature Pack Beta is Available for Team Foundation Server 2010 and Project Server Integration

    Microsoft has just announced the availability of the beta of the Feature Pack for Team Foundation Server 2010 and Project Server Integration, which marks the end of the CTPs (Community Technology Previews). The beta of the Team Foundation Server 2010 and Project Server (TFS-PS) Feature Pack is available only to MSDN subscribers and under a "Go Live" license, which means it can already be used in a production environment. As a reminder, Team Foundation Server is a collaborative work tool accompanying the Visual Studio Team System (VSTS) suite. It enables the manag...

    Read the article

  • Google Chrome user agent, wrong language

    - by B. Roland
    Hello! After some months, my Chrome (now 10.0.648.127 beta, but I tried with the latest stable too) displays some popular sites in English instead of my Chrome and system language, which is Hungarian... I checked my user agent, which in Chrome is: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.127 Safari/534.16. But in Firefox it is: Mozilla/5.0 (X11; U; Linux i686; hu-HU; rv:1.9.2.15) Gecko/20110303 Ubuntu/10.04 (lucid) Firefox/3.6.15, which is correct... My question is: how can I change my user agent (maybe dynamically, by version)? I tried google-chrome --user-agent "text", but it failed in the newest versions.

    Read the article

  • How do you decide site availability requirements?

    - by Nathan Long
    I work on a web application to file a specific kind of county taxes. Our company wants our state to mandate that counties must accept electronic filings (as opposed to paper) from any system that meets some sensible requirements for uptime, security, data validation, etc. (Yes, this would help us as a business, but it would also force county governments to be more efficient.) We're creating a draft of those requirements to be reviewed and tweaked with the state. One of the sections is "availability." We want to specify something reasonably high, but not so high that any unexpected problem will get us (or a competitor) penalized. How do we decide what's reasonable for availability requirements?

    Read the article

  • Clever DIY Display Showcases Game Consoles While Concealing Cables

    - by Jason Fitzpatrick
    How do you display all your vintage game consoles while keeping them in a clutter-free and ready-to-play state? This wall-mounted display does a great job showing off the retro gear while keeping everything tidy. Courtesy of German tinkerer and gamer Holger, the design of the display is deceptively simple. The wall mount is a basic 2×4 frame wrapped in black roofing batten (similar to the lightweight weed fabric used in gardens). Screw-in mounts for the LACK shelves are positioned every foot or so going up the frame, and a small slit in the fabric allows for hidden routing of the cables. While it looks like the consoles are simply on display, they’re actually all hooked up and ready to play. For more photos of the build, hit up the link below. LACK Video Console Shelf with Hidden Cables [IKEAHacker]

    Read the article

  • User is trying to leave! Set at-least confirm alert on browser(tab) close event!!

    - by kaushalparik27
    This is something that might be annoying or irritating for the end user. Obviously, it's impossible to prevent the end user from closing the browser. Just imagine if it were possible! That would be a horrible web world, where sites could attack you at every turn and refuse to close your browser until you confirmed your shopping cart and made the payment. LOL :) You would need to open the task manager and kill the running browser processes. Anyway, jokes apart, I have a situation where I need to alert/confirm with the user in some way when they try to close the browser or change the URL. Think of this: you are creating a single-page intranet ASP.NET application where your employees can enter/select their TDS/investment declarations, and you wish to at least ALERT/CONFIRM if they attempt to: [1] close the browser, [2] close the browser tab, or [3] go to some other site by changing the URL, without completing/freezing their declaration. So, the requirement is clear: I need to alert/confirm with the user on the above events. I am going to use the window.onbeforeunload event to make a JavaScript confirm alert box appear.

        <script language="JavaScript" type="text/javascript">
            window.onbeforeunload = confirmExit;
            function confirmExit() {
                return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";
            }
        </script>

    See! You are halfway done. Every time the browser unloads the page, the confirm alert above appears in front of the user, as shown below. By "every time the browser unloads the page" I mean that whenever the page loads or a postback happens, the browser's onbeforeunload event will be executed. So even a button or link submit that causes the page to post back will fire the onbeforeunload event! Now the hurdle is: how can we prevent the alert from showing when the page is posting back via a button/link submit? The answer is jQuery :) The idea is to add a script reference to the jQuery library and set window.onbeforeunload to null whenever any input/link causes the page to post back. Below is the complete code:

        <head runat="server">
            <title></title>
            <script src="jquery.min.js" type="text/javascript"></script>
            <script language="JavaScript" type="text/javascript">
                window.onbeforeunload = confirmExit;
                function confirmExit() {
                    return "You are about to exit the system before freezing your declaration! If you leave now and never return to freeze your declaration; then they will not go into effect and you may lose tax deduction, Are you sure you want to leave now?";
                }
                $(function() {
                    $("a").click(function() {
                        window.onbeforeunload = null;
                    });
                    $("input").click(function() {
                        window.onbeforeunload = null;
                    });
                });
            </script>
        </head>
        <body>
            <form id="form1" runat="server">
            <div></div>
            </form>
        </body>
        </html>

    So, in this post I have tried to set up a confirm alert for when the user tries to close the browser/tab or leave the site by changing the URL. I have attached a working example with this post here. I hope someone finds it helpful.

    Read the article

  • Flood fill algorithm for Game of Go

    - by Jackson Borghi
    I'm having a hell of a time trying to figure out how to make captured stones disappear. I've read everywhere that I should use the flood fill algorithm, but I haven't had any luck with that so far. Any help would be amazing! Here is my code:

        package Go;

        import static java.lang.Math.*;
        import static stdlib.StdDraw.*;
        import java.awt.Color;

        public class Go2 {

            public static Color opposite(Color player) {
                if (player == WHITE) { return BLACK; }
                return WHITE;
            }

            public static void drawGame(Color[][] board) {
                Color[][][] unit = new Color[400][19][19];
                for (int h = 0; h < 400; h++) {
                    for (int x = 0; x < 19; x++) {
                        for (int y = 0; y < 19; y++) {
                            unit[h][x][y] = YELLOW;
                        }
                    }
                }
                setXscale(0, 19);
                setYscale(0, 19);
                clear(YELLOW);
                setPenColor(BLACK);
                line(0, 0, 0, 19);
                line(19, 19, 19, 0);
                line(0, 19, 19, 19);
                line(0, 0, 19, 0);
                for (double i = 0; i < 19; i++) {
                    line(0.0, i, 19, i);
                    line(i, 0.0, i, 19);
                }
                for (int x = 0; x < 19; x++) {
                    for (int y = 0; y < 19; y++) {
                        if (board[x][y] != YELLOW) {
                            setPenColor(board[x][y]);
                            filledCircle(x, y, 0.47);
                            setPenColor(GRAY);
                            circle(x, y, 0.47);
                        }
                    }
                }
                int h = 0;
            }

            public static void main(String[] args) {
                int px;
                int py;
                Color[][] temp = new Color[19][19];
                Color[][] board = new Color[19][19];
                Color player = WHITE;
                for (int i = 0; i < 19; i++) {
                    for (int h = 0; h < 19; h++) {
                        board[i][h] = YELLOW;
                        temp[i][h] = YELLOW;
                    }
                }
                while (true) {
                    drawGame(board);
                    while (!mousePressed()) { }
                    px = (int) round(mouseX());
                    py = (int) round(mouseY());
                    board[px][py] = player;
                    while (mousePressed()) { }
                    floodFill(px, py, player, board, temp);
                    System.out.print("XXXXX = " + temp[px][py]);
                    if (checkTemp(temp, board, px, py)) {
                        for (int x = 0; x < 19; x++) {
                            for (int y = 0; y < 19; y++) {
                                if (temp[x][y] == GRAY) {
                                    board[x][y] = YELLOW;
                                }
                            }
                        }
                    }
                    player = opposite(player);
                }
            }

            private static boolean checkTemp(Color[][] temp, Color[][] board, int x, int y) {
                if (x < 19 && x > -1 && y < 19 && y > -1) {
                    if (temp[x + 1][y] == YELLOW || temp[x - 1][y] == YELLOW
                            || temp[x][y - 1] == YELLOW || temp[x][y + 1] == YELLOW) {
                        return false;
                    }
                }
                if (x == 18) {
                    if (temp[x - 1][y] == YELLOW || temp[x][y - 1] == YELLOW || temp[x][y + 1] == YELLOW) {
                        return false;
                    }
                }
                if (y == 18) {
                    if (temp[x + 1][y] == YELLOW || temp[x - 1][y] == YELLOW || temp[x][y - 1] == YELLOW) {
                        return false;
                    }
                }
                if (y == 0) {
                    if (temp[x + 1][y] == YELLOW || temp[x - 1][y] == YELLOW || temp[x][y + 1] == YELLOW) {
                        return false;
                    }
                }
                if (x == 0) {
                    if (temp[x + 1][y] == YELLOW || temp[x][y - 1] == YELLOW || temp[x][y + 1] == YELLOW) {
                        return false;
                    }
                } else {
                    if (x < 19) {
                        if (temp[x + 1][y] == GRAY) { checkTemp(temp, board, x + 1, y); }
                    }
                    if (x >= 0) {
                        if (temp[x - 1][y] == GRAY) { checkTemp(temp, board, x - 1, y); }
                    }
                    if (y < 19) {
                        if (temp[x][y + 1] == GRAY) { checkTemp(temp, board, x, y + 1); }
                    }
                    if (y >= 0) {
                        if (temp[x][y - 1] == GRAY) { checkTemp(temp, board, x, y - 1); }
                    }
                }
                return true;
            }

            private static void floodFill(int x, int y, Color player, Color[][] board, Color[][] temp) {
                if (board[x][y] != player) {
                    return;
                } else {
                    temp[x][y] = GRAY;
                    System.out.println("x = " + x + " y = " + y);
                    if (x < 19) { floodFill(x + 1, y, player, board, temp); }
                    if (x >= 0) { floodFill(x - 1, y, player, board, temp); }
                    if (y < 19) { floodFill(x, y + 1, player, board, temp); }
                    if (y >= 0) { floodFill(x, y - 1, player, board, temp); }
                }
            }
        }

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory

    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory. Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, which deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently. At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it can be sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they hold the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly. A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure

    Some concurrent data structures can be written in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed-size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer. * In all these cases, you need to use memory barriers correctly, but these are local to a core, so they don't have the same scalability problems as atomic memory operations.

    Performance tests

    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64-bit integers and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here.
    Both concurrent collections perform better than the lock-based one, as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6-core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that needed invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment! Obviously, this benchmark isn't realistic because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved

    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general-purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
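    (The footnote above says a fixed-size SPSC queue is easy to write over a circular buffer, but the post doesn't show one. Here is a minimal sketch of that idea in C#; it is my own illustration under the article's assumptions, not NAct's or the benchmark's actual code. Exactly one producer thread and one consumer thread are assumed, with volatile fields standing in for the memory barriers the footnote mentions.)

        // A minimal sketch of a fixed-size, sync-free SPSC queue over a circular buffer.
        // Safe only when exactly one thread enqueues and exactly one thread dequeues.
        public sealed class SpscQueue<T>
        {
            private readonly T[] _buffer;
            private volatile int _head; // next slot to read; written only by the consumer
            private volatile int _tail; // next slot to write; written only by the producer

            public SpscQueue(int capacity)
            {
                _buffer = new T[capacity + 1]; // one slot stays empty to tell full from empty
            }

            public bool TryEnqueue(T item) // call from the producer thread only
            {
                int tail = _tail;
                int next = (tail + 1) % _buffer.Length;
                if (next == _head) return false;   // full
                _buffer[tail] = item;              // write the element first...
                _tail = next;                      // ...then publish it with a volatile write
                return true;
            }

            public bool TryDequeue(out T item) // call from the consumer thread only
            {
                int head = _head;
                if (head == _tail) { item = default(T); return false; } // empty
                item = _buffer[head];
                _head = (head + 1) % _buffer.Length; // volatile write frees the slot
                return true;
            }
        }

    Because each index is written by exactly one thread, no atomic read-modify-write operation is ever needed, which is the property the article credits for the perfect scaling.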

    Read the article

  • HTG Explains: Why It’s Good That Your Computer’s RAM Is Full

    - by Chris Hoffman
    Is Windows, Linux, Android, or another operating system using a lot of RAM? Don’t panic! Modern operating systems use RAM as a file cache to speed things up. Assuming your computer is performing well, there’s nothing to worry about. While it may seem counterintuitive to those of us who remember our computers always being starved for RAM, high RAM usage means your RAM is being put to good use. Empty RAM is wasted RAM.

    Read the article

  • Who broke the build?

    - by Martin Hinshelwood
    I recently sent round a list of broken builds at SSW and asked for them to be fixed or deleted if they are not being used. My colleague Peter came back with a couple of questions, which I love, as it tells me that at least one person reads my email. I think first we need to answer a couple of other questions related to builds in general.

    Why do we want the build to pass?
    - Any developer can pick up a project and build it
    - Standards can be enforced
    - Constant quality is maintained
    - Problems in code are identified early

    What could a failed build signify?
    - Developers have not built and tested their code properly before checking in.
    - Something added depends on a local resource that is not under version control or does not exist on the target computer.
    - Developers are not writing tests to cover common problems.
    - There are not enough tests to cover problems.

    Now that we know why, let's answer Peter's questions.

    Where is this list? (Can we see it somehow?) You can normally only see the builds listed for each project, but you have a little application called "Build Notifications" on your computer. It is installed when you install Visual Studio 2010. Figure: Starting the build notification application on Windows 7. Once you have it open (it may disappear into your system tray) you should click "Options" and select all the projects you are involved in. This application only lists projects that have builds, so don't worry if yours is not listed; this just means you are about to set up a build, right? I just selected ALL projects that have builds. Figure: All builds are listed here. In addition to seeing the list, you will also get toast notifications of build failures.

    How can we get more info on what broke the build? (Who is interesting too, to point the finger, but more important is what.) The only thing worse than breaking the build is continuing to develop on a broken build! Figure: I have highlighted the users who either are bad for breaking the build, or very bad for not fixing it. To find out what is wrong with a build you need to open the build definition. You can open a web version by double-clicking the build in the image above, or you can open it from "Team Explorer": just connect to your project, open the "Builds" tree, and open the build by double-clicking on it. Figure: Opening a build is easy; double-click it and then open a build run from the list. Figure: Good example, the build and tests have passed. Figure: Bad example, there are 133 errors preventing POK from being built on the build server. For identifying failures see: Solution: Getting Silverlight to build on Team Build 2010 RC; Solution: Testing Web Services with MSTest on Team Build; Finding the problem on a partially succeeded build. So, Peter asked about blame; let's have a look and see. Figure: The build has been broken for so long I have no idea when it was broken, but everyone on this list is to blame (I am there too). The rest of the history is lost in the sands of time; there is no way to tell when the build was originally broken, or by whom, or even if it ever worked in the first place. Builds should be protected by the team that uses them, and the only way to do that is to have that team own them. It is fine for me to go in and set up a build, but the ownership of a build should always reside with the person who broke it last.

    Conclusion

    This is an example of a pointless build.
    Let's be honest: if you have a system like TFS in place and builds are constantly left broken, or not added to projects, then your developers don't yet understand the value. I have found that adding a Gated Check-in helps instil that understanding of value. If you prevent developers from checking in without passing the basic quality gate of "your code builds on another computer", it makes them look more closely at why they can't check in. I have had builds fail because one developer had a "d" drive, but the build server did not. That is exactly what gated builds are there to catch. If you want to know what builds to create and why, I wrote a post on "Do you know the minimum builds to create on any branch?" Technorati Tags: TFS2010, Gated Check-in, Builds, Build Failure, Broken Build

    Read the article

  • Installing Ubuntu on a computer with USB 3.0 hardware

    - by Matt
    I'm installing Ubuntu 10.10 (32-bit version) on an HP Envy 15. I get the same problem these people have here: the "unable to find a medium containing a live file system" error when installing, but the question was never resolved. I spent so long researching and got so frustrated that I took my computer down to a shop and asked them to install it for me. It took them a while, but they managed to get it installed. The reason for this error, they said, was that Ubuntu didn't have the USB 3.0 drivers it needed to install properly. I'm reinstalling Ubuntu yet again and I've run into the same issue, so my question is: does anyone know... a) where to get these USB 3.0 drivers? b) how to get them installed when installing the Ubuntu OS? Thanks, Matt

    Read the article

  • Change the Default Number of Rows of Tiles on the Windows 8 UI (Metro) Screen

    - by Lori Kaufman
    By default, Windows 8 automatically sets the number of rows of tiles to fit your screen, depending on your monitor size and resolution. However, you can tell Windows 8 to display a certain number of rows of tiles at all times, regardless of the screen resolution. To do this, we will make a change to the registry. If you are not already on the Desktop, click the Desktop tile on the Start screen. NOTE: Before making changes to the registry, be sure you back it up. We also recommend creating a restore point you can use to restore your system if something goes wrong.
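    (The excerpt stops before the registry details. For illustration only, the key and value names below are from memory and may not match the article exactly, so verify them before relying on this; the kind of change described would look like the following in a .reg file, here forcing three rows of tiles:)

        Windows Registry Editor Version 5.00

        ; Assumed key/value names; sets the Start screen to three rows of tiles
        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ImmersiveShell\Grid]
        "Layout_MaximumRowCount"=dword:00000003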

    Read the article

  • How to Delete Your Metro Application’s Usage History in Windows 8

    - by Taylor Gibb
    Windows 8 includes an all-new Task Manager, which brings a whole bunch of new features. One of my favorites is the App history tab, which allows geeks like us to monitor our applications’ resource usage. Sometimes you may wish to reset the counters, though, so here’s how.

    Read the article

  • Byte Size Tips: How to Disable the Useless Dashboard on Mac OS X

    - by The Geek
    After getting my new MacBook Air with the awesome battery life, I decided to give OS X a spin for a while to see how I liked it. About 34 seconds later, I encountered my first irritation: The stupid Dashboard feature is just completely useless. Here’s how to disable it. Note: overall, Mac OS X is a really great operating system. It’s just this one feature that makes no sense. Simply Remove Dashboard from Spaces     

    Read the article

  • Which hosted ecommerce solutions allow customization?

    - by Diego
    Following my previous question, I'm now evaluating the possibility of using a hosted platform for the ecommerce project I have to implement. Before I start "playing" with each one of them, I'd like to ask if anybody knows which ones allow a good degree of customization. At the moment I'm looking at BigCommerce, but it seems that customization is limited to templates, while I need additional features which require PHP coding. Also, I'd need to be able to import additional product data into the system, and I'd need to do this via code; I had a look at some integrations, but they gave me the impression that they all run on the rendered page via JavaScript. For example, if I want to show Facebook reviews on a product, I'll have to add some JS that will fetch them and show them on the page. This is not optimal, as I must cater for people with JS disabled, and therefore I'd need to run my own PHP code. Thanks again for all the opinions.

    Read the article

  • Can't Log in to Lubuntu 12.04 X Server

    - by isomorphismes
    As of rebooting yesterday, I can't log in as myself to the X server part of 64-bit Lubuntu 12.04. It's the same problem as "Can not get passed the login screen", but that solution didn't work for me. Troubleshooting steps I already took: I can log in as guest (with whatever window manager) to the graphical (X) view of Lubuntu, and I can log in as myself on a virtual terminal. (In fact, I'm writing this from w3m for that reason.) So I know my password is correct and that most aspects of the system are working. One of the top Google results for "can't log into lubuntu" mentioned a disk-full problem on netbooks; I don't have that problem. Let me know if I need to paste any messages or config files to make this question clearer and I'll do so.

        $ ls -l /home
        total 12
        drwxr-xr-x 99 me me 12288 May 26 14:16 me
        $ ls -ld /tmp
        drwxrwxrwt 16 root root 4096 May 26 15:46 /tmp

    Read the article

  • Bring the Whole Ubuntu Gang Home to Your Desktop with this Mascots Wallpaper

    - by Asian Angel
    This wonderful wallpaper features all of the Ubuntu mascots together as stuffed animals and will make a perfect addition to your Ubuntu desktop. Ubuntu Wallpaper [via Web Upd8]

    Read the article

  • Thunderbird uses the wrong browser

    - by Aaron Digulla
    I'm unable to make Thunderbird open the default browser. In the browser preferences, Chromium is selected as the default browser. It's also selected in "Default Applications" in System Settings. In Thunderbird, I read "Chrome (Default)" which is wrong on all levels: Chrome itself complains that it's not the default browser when I click a link inside Thunderbird. In all other places, that I could find, Chromium is the default Here is what I tried: I used update-alternatives --config x-www-browser to select chromium-browser as well (see How do I change the default browser?). And even when I select a different browser from the list in the Thunderbird preferences, it still opens Chrome. My current solution is to create a link from /usr/bin/google-chrome to chromium-browser. How can I force Thunderbird to use the browser I want??? EDIT I also updated gnome-www-browser (update-alternatives --config gnome-www-browser) after feedback from roadmr but that didn't help. At least sensible-browser opens Chromium, now, but Thunderbird is stubborn.

    Read the article

  • Trying to install postgresql:i386 on 12.04 amd64

    - by tim jackson
    Due to some legacy 32-bit libraries being used in PostgreSQL functions, I need to get a 32-bit install of PostgreSQL on a 64-bit native system. But it seems like there is a problem with multiarch not seeing _all.debs as satisfying dependencies.

        uname -a: 3.8.0-29-generic #42-precise-Ubuntu SMP x86_64
        dpkg --print-architecture: amd64
        dpkg --print-foreign-architecture: i386

    apt-get install postgresql-9.1 returns:

        postgresql : Depends: postgresql-9.1 but it is not going to be installed
        postgresql-9.1:i386 : Depends: postgresql-common:i386 but it is not installable
                              Depends: ssl-cert:i386 but it is not installable
                              Depends: locales:i386 but it is not installable
        etc ..

    But I have installed ssl-cert_1.0.28ubuntu0.1_all.deb and locales_..._all.deb, and postgresql-common is an _all.deb. Does anyone have experience installing 32-bit packages on 64-bit systems that depend on packages that are _all.debs? Or has anyone installed 32-bit Postgres on 64-bit? Any help appreciated.

    Read the article

  • Provisioning Videos

    - by Owen Allen
    There are a couple of new videos up on the Oracle Learning YouTube channel about Ops Center's provisioning capabilities. Simon Hayler does a walkthrough of a couple of different procedures. The first video shows you how to provision Oracle Solaris Zones: it explains how to create an Oracle Solaris Zone profile, and then how to apply it (using a deployment plan) to a target system. The second video shows you how to provision an x86 server with Oracle Solaris. This uses a very similar process: you create an OS provisioning profile, then use a deployment plan to apply it to the target hardware. The documentation covers OS provisioning and zone creation in the Feature Guide, if you're looking for additional information.

    Read the article

  • Annoying security "feature" in Windows 2008 R2 burns me, but not DVD's

    - by Stan Spotts
    This stuff drives me nuts. I'm all for hardening servers, and reducing security footprints, but I always want the option to allow me to get work done versus securing my system. I use Windows Server 2008 R2 as my laptop OS for a number of reasons I don't need to review here. It's pimped out to work like Windows 7 for most things. But my DVD writer is crippled, and evidently it's on purpose: http://blogs.technet.com/askcore/archive/2010/02/19/windows-server-2008-r2-no-recording-tab-for-cd-dvd-burner.aspx I don't WANT to log in as the local administrator to burn a damned DVD.  WTF isn't this configurable through the registry, or better yet, group policy? There are no security settings that I should not have the option to enable or disable.

    Read the article

  • How to Create AppArmor Profiles to Lock Down Programs on Ubuntu

    - by Chris Hoffman
    AppArmor locks down programs on your Ubuntu system, allowing them only the permissions they require in normal use – particularly useful for server software that may become compromised. AppArmor includes simple tools you can use to lock down other applications. AppArmor is included by default in Ubuntu and some other Linux distributions. Ubuntu ships AppArmor with several profiles, but you can also create your own AppArmor profiles. AppArmor’s utilities can monitor a program’s execution and help you create a profile. Before creating your own profile for an application, you may want to check the apparmor-profiles package in Ubuntu’s repositories to see if a profile for the application you want to confine already exists.
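    (As a sketch of the workflow described above: the target program /usr/bin/myserver is a placeholder of mine, and the commands assume the apparmor-utils package on Ubuntu.)

        # Install the profiling utilities
        sudo apt-get install apparmor-utils

        # Generate a profile skeleton while you exercise the program normally
        sudo aa-genprof /usr/bin/myserver

        # Later, turn any logged violations into additional profile rules
        sudo aa-logprof

        # Switch the finished profile into enforce mode
        sudo aa-enforce /etc/apparmor.d/usr.bin.myserver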

    Read the article

  • Allowing Access to HttpContext in WCF REST Services

    - by Rick Strahl
    If you’re building WCF REST services you may find that WCF’s OperationContext, which provides some amount of access to HTTP headers on inbound and outbound messages, is pretty limited: it doesn’t provide access to everything, and sometimes not in a convenient manner. For example, accessing query string parameters explicitly is pretty painful:

        [OperationContract]
        [WebGet]
        public string HelloWorld()
        {
            var properties = OperationContext.Current.IncomingMessageProperties;
            var property = properties[HttpRequestMessageProperty.Name] as HttpRequestMessageProperty;
            string queryString = property.QueryString;
            var name = StringUtils.GetUrlEncodedKey(queryString, "Name");
            return "Hello World " + name;
        }

    And that doesn’t account for the logic in GetUrlEncodedKey to retrieve the query string value. It’s a heck of a lot easier to just do this:

        [OperationContract]
        [WebGet]
        public string HelloWorld()
        {
            var name = HttpContext.Current.Request.QueryString["Name"] ?? string.Empty;
            return "Hello World " + name;
        }

    OK, so if you follow the REST guidelines for WCF REST you shouldn’t have to rely on reading query string parameters manually but should instead rely on routing logic, but you know what: WCF REST is a PITA anyway, and anything to make things a little easier is welcome. To enable the second scenario there are a couple of steps that you have to take on your service implementation and in the configuration file.

    Add aspNetCompatibilityEnabled in web.config

    First you need to configure the hosting environment to support ASP.NET when running WCF service requests. This ensures that the ASP.NET pipeline is fired up and configured for every incoming request.

        <system.serviceModel>
            <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    Mark Up Your Service Implementation with the AspNetCompatibilityRequirements Attribute

    Next you have to mark up the service implementation – not the contract, if you’re using a separate interface!!! – with the AspNetCompatibilityRequirements attribute:

        [ServiceContract(Namespace = "RateTestService")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class RestRateTestProxyService

    Typically you’ll want to use Allowed as the preferred option. The other options are NotAllowed and Required. Allowed will let the service run even if the web.config attribute is not set; Required has to have it set. All these settings determine whether an ASP.NET host AppDomain is used for requests. Once Allowed or Required has been set on the implementing class, you can make use of the ASP.NET HttpContext object. When I allow for ASP.NET compatibility in my WCF services I typically add a property that exposes the Context and Request objects a little more conveniently:

        public HttpContext Context
        {
            get { return HttpContext.Current; }
        }

        public HttpRequest Request
        {
            get { return HttpContext.Current.Request; }
        }

    While you can also access the Response object, write raw data to it, and manipulate headers, THAT is probably not such a good idea, as both your code and WCF will end up writing into the output stream. However, it might be useful in some situations where you need to take over output generation completely and return something completely custom. Remember, though, that WCF REST actually supports that as well with Stream responses, which essentially allow you to return any kind of data to the client, so using Response should really never be necessary.

    Should you or shouldn’t you?
    WCF purists will tell you never to muck with platform-specific features or the underlying protocol, and if you can avoid it you definitely should. Query string management in particular can be handled largely with URL routing, but there are exceptions, of course. Try to use what WCF natively provides, if possible, as it makes the code more portable. For example, if you enable ASP.NET compatibility you won’t be able to self-host a WCF REST service. At the same time, realize that especially in WCF REST there are a number of big holes, and access to some features is a royal pain, so it’s not unreasonable to access the HttpContext directly, especially if it’s only for read-only access. Since everything in REST works off URLs and the HTTP protocol, more control and easier access to HTTP features is a key requirement for building flexible services. It looks like vNext of the WCF REST stuff will feature many improvements along these lines, with much deeper native HTTP support that is often so useful in REST applications, along with much more extensibility that allows for customization of the inputs and outputs as data goes through the request pipeline. I’m looking forward to this stuff, as WCF REST as it exists today is still a royal pain (in fact I’m struggling with a mysterious version conflict/crashing error on my machine that I have not been able to resolve – grrrr…). © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, AJAX, WCF
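    (As a postscript to the "Should you or shouldn’t you?" section above: for the query string case specifically, there is also a native WCF REST route that avoids HttpContext, and the ASP.NET compatibility requirement, altogether. This is a minimal sketch of my own, not from the article, assuming the service is hosted with webHttpBinding/WebServiceHost:)

        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public class HelloService
        {
            [OperationContract]
            [WebGet]
            public string HelloWorld()
            {
                // QueryParameters is a NameValueCollection, so unknown keys return null
                var query = WebOperationContext.Current.IncomingRequest.UriTemplateMatch.QueryParameters;
                var name = query["Name"] ?? string.Empty;
                return "Hello World " + name;
            }
        }

    Because this stays inside WCF's own abstractions, the service remains self-hostable, which is the portability trade-off described above.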

    Read the article

< Previous Page | 937 938 939 940 941 942 943 944 945 946 947 948  | Next Page >