Search Results

Search found 8323 results on 333 pages for 'execute'.


  • What happened to .fx files in D3D11?

    - by bobobobo
    It seems they completely ruined .fx file loading / parsing in D3D11. In D3D9, loading an entire effect file was D3DXCreateEffectFromFile( .. ), and you got a ID3DXEffect9, which had great methods like SetTechnique and BeginPass, making it easy to load and execute a shader with multiple techniques. Is this completely manual now in D3D11? The highest level functionality I can find is loading a SINGLE shader from an FX file using D3DX11CompileFromFile. Does anyone know if there's an easier way to load FX files and choose a technique? With the level of functionality provided in D3D11 now, it seems like you're better off just writing .hlsl files and forgetting about the whole idea of Techniques.

    Read the article

  • How to make Unit Tests to make sure stored procedure is deleting row from the database?

    - by aspdotnetuser
    I'm new to unit testing and I need some help with the following. I have created a small project to help me learn how to make Unit Tests. The functionality for one of the forms in my application deletes a user from the User table (and other rows in mapping tables). Currently, the unit test I have created to test this sets up the required objects and then calls the business rules method (passing in the user id) which calls the data access method to execute the stored procedure that deletes the rows in the tables. Is this the correct method to test whether something is being deleted successfully? Should the unit test / setup method first insert some test data which the unit test then deletes?
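
    One way this is often approached (a sketch only; UserRepository, InsertTestUser, DeleteUser and GetUserById are hypothetical stand-ins for the poster's own business rules and data access classes) is to have the test insert its own row first, so it never depends on pre-existing data, then delete it and assert it is gone:

      using NUnit.Framework;

      [TestFixture]
      public class DeleteUserTests
      {
          // Hypothetical wrapper around the data access layer that calls the stored procedures.
          private UserRepository _repository;
          private int _testUserId;

          [SetUp]
          public void SetUp()
          {
              _repository = new UserRepository();
              // Arrange: insert a known test user so the test owns its own data.
              _testUserId = _repository.InsertTestUser("unit-test-user");
          }

          [Test]
          public void DeleteUser_RemovesRowFromUserTable()
          {
              // Act: call the business rules method that executes the delete stored procedure.
              _repository.DeleteUser(_testUserId);

              // Assert: the row (and its mapping rows) should no longer be found.
              Assert.IsNull(_repository.GetUserById(_testUserId));
          }
      }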

    Read the article

  • Assign keys to commands in Terminal?

    - by NES
    Is there a way to assign special key combinations to pieces of text in the terminal? For example, the less command is very useful and I use it a lot to pipe the output of another process through it. The idea would be to set up key combinations that are only active in the terminal and that write different commands for you. So pressing CTRL + L in a terminal window could write | less, or CTRL + G could stand for | grep. Note: I just mean adding the letters to the command line, not executing the command right away. Something similar to tab completion, but more specific.
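
    One way to get this in bash (a sketch, not from the question: it uses the readline bind builtin, and the key chords shown are just examples; CTRL + L is normally clear-screen, so another chord may be preferable) is to add bindings to ~/.bashrc:

      # ~/.bashrc: insert text at the cursor without executing anything
      bind '"\C-g": " | grep "'
      bind '"\el": " | less"'    # Alt+L, since Ctrl+L is usually bound to clear-screen

    These bindings only apply to bash/readline, not to every program running inside the terminal.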

    Read the article

  • Is MVC now the only way to write PHP?

    - by JasonS
    Hey... it's Xmas Eve and something is bugging me... yes, I have work on my mind even when I am on holiday. The vast majority of frameworks available for PHP now use MVC. Even ASP.NET has its own MVC module. I can see the attraction of MVC, I really can, and I use it frequently. The only downside that I can see is that you have to fire up the whole system to execute a page request. Depending on your task this can be a little wasteful. So the question: in a professional environment is this the only way to use PHP nowadays, or are there other design methods with alternative benefits?

    Read the article

  • How to automatically visit a website in the background after Ubuntu booted?

    - by Bakhtiyor
    I wanted to know how to automatically visit a website in the background when Ubuntu loads. As far as I know w3m is for visiting web sites from the console. That is why I am writing the following command in crontab -e: @reboot w3m http://example.com/ > test_file The reason for writing the content of the web site into test_file is just to know whether this command has been executed or not. Unfortunately it is not executed every time Ubuntu loads. But the next command, which comes after it and looks like this: @reboot date >> reboot_file is executed every time. What is wrong with my command? When I execute it in the console it outputs the content of example.com into test_file. Are there any other options to do that?
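
    One common explanation (an assumption, not something stated in the question) is that @reboot jobs fire before the network is up, so w3m fails while the date command still succeeds. A minimal sketch of a crontab entry that waits a bit and uses absolute paths, since cron runs with a very small environment:

      # crontab -e: give the network ~60 seconds to come up, then fetch the page
      @reboot sleep 60 && /usr/bin/w3m -dump http://example.com/ > /home/youruser/test_file 2>&1

    Redirecting stderr as well means any w3m error message ends up in the file, which makes it easier to see why a particular boot failed. /home/youruser/ is a placeholder for whatever absolute path is actually used.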

    Read the article

  • Fix Windows MBR using Ubuntu Live CD

    - by kova
    I'm trying to fix the MBR using an Ubuntu live CD. I already have ms-sys installed, but from the threads that I saw, I'm not completely sure on which /dev I should execute the command: sudo ms-sys --mbr7 /dev/??? (is mbr7 the correct option when using Windows 7?)
    ubuntu@ubuntu:~$ sudo fdisk -l
    Disk /dev/sda: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x1f205b1f
    Device Boot Start End Blocks Id System
    /dev/sda1 * 38 38 0 0 Empty
    /dev/sda2 * 2048 206847 102400 7 HPFS/NTFS/exFAT
    /dev/sda3 206848 155854847 77824000 7 HPFS/NTFS/exFAT
    /dev/sda4 155854848 625137663 234641408 7 HPFS/NTFS/exFAT
    ubuntu@ubuntu:~$
    Why is /dev/sda1 empty? I'm trying to fix the MBR because I'm getting a black screen when trying to load the operating system.
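
    For what it's worth (general background, not something stated in the question): the master boot record lives in the first sector of the disk itself, not inside any partition, so ms-sys is normally pointed at the whole-disk device. A sketch based on the fdisk output above; double-check the target disk first, since writing an MBR to the wrong device is destructive:

      # write a Windows 7 style MBR to the disk (not to a partition such as /dev/sda2)
      sudo ms-sys --mbr7 /dev/sda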

    Read the article

  • Help me make a cronjob/screen command please?

    - by Josip Gòdly Zirdum
    Hi guys, I want to set up a cronjob on reboot to do this: cd /home/admin/vivalaminecraft.com && screen -d -m -S mcscreen && mono McMyAdmin.exe The issue is that when I execute this it seems to create the screen, but it doesn't run mono McMyAdmin.exe inside the screen... Is there something like a "then" command, so it does 1, then 2, then 3? Could someone please help out :) So I tried this: @reboot screen -dmS minecraft @reboot cd /home/admin/vivalaminecraft.com @reboot mono McMyAdmin.exe It still doesn't work. The screen is created but it doesn't have the mono execution in it. I also put this in a script: #!/bin/bash screen -dmS minecraft; cd /home/admin/vivalaminecraft.com; mono McMyAdmin.exe; Is this correct?
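
    A hedged sketch of what is probably intended (the paths are taken from the question; the rest is an assumption): commands chained after screen with && or ; run in the original shell, not inside the detached session, so the command has to be handed to screen itself:

      # crontab -e: start McMyAdmin inside a detached screen session named "minecraft" at boot
      @reboot /usr/bin/screen -dmS minecraft bash -c 'cd /home/admin/vivalaminecraft.com && exec mono McMyAdmin.exe'

    Once McMyAdmin exits, the screen session ends with it, so screen -r minecraft only attaches while the process is alive.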

    Read the article

  • How to open the console in different browsers?

    - by Šime Vidas
    Chrome: Press CTRL + SHIFT + I to open the Developer Tools. Click on the "Open console." icon in the bottom left corner. IE9: Press F12 to open the developer tools. Open the Script tab, click the "Console" button on the right. Firefox 4: Press CTRL + SHIFT + K to open the Web console. What about Opera 11 and Safari 5? Clarification: By console I mean the JavaScript console that lets you input and execute JavaScript code.

    Read the article

  • Open Grid Engine or Akka/Something more fault tolerant?

    - by Mike Lyons
    My use case is that I have a pipeline of independent, stand-alone programs that I want to execute in a certain order on specific pieces of data that are output from previous pipeline stages. The pipeline is entirely linear and doesn't do anything in terms of alternate paths through the pipe. I'm currently using SGE to do this and it works OK; however, occasionally a job will overstep its memory bounds and fail, and all jobs that require that output data will fail. The pipe needs to be restarted in that case, and it seems that whatever is providing the fault tolerance in Akka might solve that for me?

    Read the article

  • How to visit automatically a website in the background after Ubuntu loads

    - by Bakhtiyor
    I wanted to know how to automatically visit a website in the background when Ubuntu loads. As far as I know w3m is for visiting web sites from the console. That is why I am writing the following command in crontab -e: @reboot w3m http://example.com/ > test_file The reason for writing the content of the web site into test_file is just to know whether this command has been executed or not. Unfortunately it is not executed every time Ubuntu loads. But the next command, which comes after it and looks like this: @reboot date >> reboot_file is executed every time. What is wrong with my command? When I execute it in the console it outputs the content of example.com into test_file. Are there any other options to do that?

    Read the article

  • How do you handle measuring Code Coverage in JavaScript

    - by Dancrumb
    In order to measure Code Coverage for JavaScript unit tests, one needs to instrument the code, run the tests and then perform post-processing. My concern is that, as a result, you are unit testing code that will never be run in production. Since JavaScript isn't compiled, what you test should be precisely what you execute. So here's my question, how do you handle this? One thought I had was to run Unit Testing on the production code and use that for my pass fail. I would then create a shadow of my production code, with instrumentation and run my unit tests again; this would give me my code coverage stats. Has anyone come across a method that is a little more graceful than this?

    Read the article

  • How to make a link to a .desktop [Desktop Entry] file

    - by Gonzalo
    I made a link on my Desktop to the launcher file "Compiz" in /usr/share/applications/. When I try to execute it I get: "The application launcher "Link to compiz.desktop" has not been marked as trusted. If you do not know the source of this file, launching it may be unsafe." So my question is: how do I make such a launcher on my Desktop? Also, what kind of files are these [Desktop Entry] files, and how can they be executed (by double clicking on them) if they have permissions such as: -rw-r--r-- 1 root root 396 2010-12-17 15:23 compiz.desktop
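
    One plausible workaround (an assumption based on how GNOME's "not marked as trusted" warning usually behaves, not something stated in the question): the launcher has to be a real file that is marked executable, so copying it and adding the execute bit is often enough:

      # copy the launcher instead of symlinking it, then mark it executable so it can be trusted
      cp /usr/share/applications/compiz.desktop ~/Desktop/
      chmod +x ~/Desktop/compiz.desktop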

    Read the article

  • how does server communication work in a flash game with a php backend

    - by Tim Rogers
    I am trying to create a browser game using ActionScript/Flash. Currently, I'm trying to understand how I would go about creating a back-end which interfaces with my MySQL database. As far as I understand, if I create a PHP file on a webserver called test.php and then navigate to a webpage hosted on the server, e.g. www.example.com/test, the PHP script will run and display the result in my browser. This would use HTTP. Is this how communication between client and server usually works in a Flash game? For example, if the game needed to query the db, would ActionScript have to essentially invoke the URL of the PHP script that would execute the query? It could then parse the data and use it. If this is the case, then is JSON considered a good way to transfer data over HTTP?

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called code first migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and you would like to upgrade to this EF 4.3.1 version and use this approach, so you can keep track of the changes that you make to your database.
    The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET framework here.
    Adding EF 4.3.1 to our existing application
    Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console; this will open the Power Shell Host Window where we can work with all the EF commands. In order to install this library into your existing application you should type Install-Package EntityFramework
    This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> entry which contains the EntityFramework version you have, and an <entityFramework> section was also added, as shown below. This section is by default configured to use SQL Express, which won't be necessary in this case, so you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support, as shown in the previous image.
    At this point we face one issue: in order to be able to work with Migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.
    Getting a Migration-History table into an existing database
    The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator. So we have to add it with the following code:
      using System.Data.Entity.Migrations;  // add this at the top of your cs file

      // Make sure to use the name of your existing DbContext
      public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>
      {
          public Configuration()
          {
              // Set automatic migrations to false since we'll be applying the migrations manually in this case.
              this.AutomaticMigrationsEnabled = false;
              SetSqlGenerator("MySql.Data.MySqlClient", new MySql.Data.Entity.MySqlMigrationSqlGenerator());
          }
      }
    This code will set up the configuration that we'll be using when executing all the migrations for our application. Once we have done this we can build our application to check that everything is fine.
    Creating our Initial Migration
    Now let's add our initial migration. In Package Manager Console, execute "add-migration InitialCreate"; you can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made to our application:
    A new Migrations folder was created.
    A new migration class called InitialCreate, which in most cases should have empty Up and Down methods as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that creates an entity which already exists in your database, if there is any. I found this easier when you don't have any pending updates to make to your database.
    Now we have our empty migration that will make no changes in our database and represents how all the things are at the beginning of our migrations. Finally, let's create our __MigrationHistory table. Optionally you can add SQL code to delete the EdmMetadata table, which is not needed anymore:
      public override void Up()
      {
          // Just make sure that you used a 4.1 or later version
          Sql("DROP TABLE EdmMetadata");
      }
    From our Package Manager Console let's type: Update-Database. If you'd like to see the operations made by each Update-Database command you can add the -verbose flag after Update-Database. This will make two important changes. It will execute the Up method in the initial migration, which has no changes in the database. And second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database it will compare the current model to the one stored in the Model column of this table.
    Conclusion
    The important thing about this walk-through is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments; also please check our forums here, where we keep answering questions for the community in general. Happy MySQL/Net Coding!

    Read the article

  • Help me understand a part of Java Language Specification

    - by Software Engeneering Learner
    I'm reading part 17.2.1 of the Java Language Specification: http://docs.oracle.com/javase/specs/jls/se7/html/jls-17.html#jls-17.2.1 I won't copy the text, it's too long, but I would like to know why, for the third step of the sequence, they say: If thread t was removed from m's wait set in step 2 due to an interrupt The thread couldn't get to step 2 without having been removed from the wait set, because it is written for step 1: Thread t does not execute any further instructions until it has been removed from m's wait set Thus the thread can't be removed from the wait set in step 2, whatever the reason, because it was already removed. Please help me understand this.

    Read the article

  • At which architecture level are you running BDD tests (e.g. Cucumber)

    - by Pete
    I have in the last year gotten quite fond of using SpecFlow (which is a .NET port of Cucumber). I have used it to test an ASP.NET MVC application both at the web layer, i.e. using browser automation, and at the controller layer. The first gives me higher confidence in the correctness of the application, because JavaScript is tested and improper controller configuration is also caught. But those tests are slower to execute, and more complex to implement, than those just testing at the controller layer. My tests are full functional tests, i.e. they exercise all layers of the application, all the way down to the database. So the first thing before any scenario is that the database is cleared of data, allowing the test to assume that only the data specified in the "Given" block exists. Then I see examples of how to use these tools where the tests just exercise the model layer. So what are your experiences with these tools? Which layer of the application do you test at?

    Read the article

  • ~/.xinput.d folder is ignored in Ubuntu 13.04

    - by CaptSaltyJack
    It used to be that you could make a file ~/.xinput.d/en_US and put xinput commands in there, such as enabling drag lock. Now, for some reason, in 13.04 this does not work. Anyone know why this changed, and how to set these? I suppose I could just put the xinput commands in a script file and have it execute upon login. I'm just wondering why the old method stopped working.
    EDIT: Current file /etc/X11/xinit/xinput.d/en_US:
      xinput set-prop 17 316 1
      xinput set-prop 17 317 350
    But I've realized that for some reason the touchpad ID changes; right now it's 15. Also, the IDs of the actual properties such as "Drag Lock" can change. So this method doesn't work.
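
    Since the numeric device and property IDs are not stable across boots, one workaround (a sketch, assuming a Synaptics touchpad; check xinput list and xinput list-props for the exact names on your hardware) is to address both by name in a script run at login:

      # run at login instead of relying on numeric IDs, which can change between boots
      xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Locked Drags" 1
      xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Locked Drags Timeout" 350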

    Read the article

  • Ubuntu doesn't boot after 11.10 upgrade from 11.04

    - by Gio Borje
    I had this problem earlier: Crash after update to 11.10, from 11.04; I ran the solution and had everything updated and upgraded. I get the following lines after booting, with some of the preceding start, stop and [OK] lines as in the other topic:
    * Checking battery state...
    [ 21.640534] btusb_intr_complete: hci0 urb ffff88011e948480 failed to resubmit (19)
    [ 21.640690] btusb_bulk_complete: hci0 urb ffff88011e948cc0 failed to resubmit (19)
    [ 21.640734] btusb_bulk_complete: hci0 urb ffff88011e9480c0 failed to resubmit (19)
    I run tty2 and execute startx to start the GUI that I'm using right now, but Ubuntu won't boot without it.

    Read the article

  • comparison of an unsigned variable to 0

    - by user2651062
    When I execute the following loop:
      unsigned m;
      for (m = 10; m >= 0; --m) {
          printf("%d\n", m);
      }
    the loop doesn't stop at m == 0; it keeps executing interminably, so I thought the reason was that an unsigned value cannot be compared to 0. But when I did the following test:
      unsigned m = 9;
      if (m >= 0)
          printf("m is positive\n");
      else
          printf("m is negative\n");
    I got this result: m is positive which means that the unsigned variable m was successfully compared to 0. Why doesn't the comparison of m to 0 work in the for loop when it works fine elsewhere?
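
    For what it's worth, a common way to write a terminating countdown with an unsigned counter looks like the sketch below (general C knowledge, not part of the question). The point is that m >= 0 is always true for an unsigned type, because decrementing past zero wraps around to UINT_MAX instead of going negative, so the loop has to test for zero explicitly:

      #include <stdio.h>

      int main(void)
      {
          unsigned m = 10;
          for (;;) {
              printf("%u\n", m);   /* %u matches the unsigned type */
              if (m == 0)
                  break;           /* stop after printing 0 instead of relying on m >= 0 */
              --m;
          }
          return 0;
      }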

    Read the article

  • What features would you like to have in PHP? [closed]

    - by StasM
    Since it's the holiday season now and everybody's making wishes, I wonder - which language features do you wish PHP would add? I am interested in some practical suggestions/wishes for the language. By practical I mean: Something that can be practically done (not: "I wish PHP would guess what my code means and fix bugs for me" or "I wish any code would execute under 5ms") Something that doesn't require changing PHP into another language (not: "I wish they'd drop $ signs and use space instead of braces" or "I wish PHP were compiled, statically typed and had # in its name") Something that would not require breaking all the existing code (not: "Let's rename 500 functions and change parameter order for them") Something that does change the language or some interesting aspect of it (not: "I wish there was an extension to support the XYZ protocol" or "I wish bug #12345 were finally fixed") Something that is more than a rant (not: "I wish PHP wouldn't suck so badly") Does anybody have any good wishes? Mod edit: Stanislav Malyshev is a core PHP developer.

    Read the article

  • Real performance of node.js

    - by uther.lightbringer
    I've got a question concerning node.js performance. There are quite a lot of "benchmarks" and a lot of fuss about the great performance of node.js. But how does it stand up in the real world, not just processing empty requests at high speed? Could someone try to compare this scenario: a Java (or equivalent) server running an application with complex business logic between receiving the request and sending the response. How would node.js deal with it? If there was a need for a lot of JavaScript processing on the server side, is node.js really so fast that it can execute that JavaScript and stand a chance against more heavyweight competitors?

    Read the article

  • Ubuntu software center crashes when i try to download anything

    - by vaggelas
    I was trying to set up an old printer by installing several drivers, and I got a problem with Software Center. I am unable to download anything from it. It opens, but when I try to download something it shows an error message and the installation stops. I can do all the installations from the command line, so it is not a major problem, but it would still be nice to have it working again. I reinstalled it several times from the terminal, trying all the solutions I found in the forum, but the problem is still here. Any help is appreciated. This is the error I get:
    Traceback: usr/share/softwarecenter/backend/installbackend_impl/aptd.py, line 383, in install defer = true
    DbusException: org.freedesktop.Dbus.Error.Spawn.Execfailed: failed to execute program /usr/lib/dbus-1.0/dbus-daemon-launch-helper = Success

    Read the article

  • Using BizTalk to bridge SQL Job and Human Intervention (Requesting Permission)

    - by Kevin Shyr
    I start off the process with either a BizTalk Scheduler (http://biztalkscheduledtask.codeplex.com/releases/view/50363) or a manual file drop of the XML message. The manual file drop is there to allow the SQL Job to call a "File Copy" SSIS step to copy the trigger file for the next process, and allows the SQL Job to be linked back into BizTalk processing.
    The Process Trigger XML looks like the following. It is basically the configuration hub of the business process:
      <ns0:MsgSchedulerTriggerSQLJobReceive xmlns:ns0="urn:com:something something">
        <ns0:IsProcessAsync>YES</ns0:IsProcessAsync>
        <ns0:IsPermissionRequired>YES</ns0:IsPermissionRequired>
        <ns0:BusinessProcessName>Data Push</ns0:BusinessProcessName>
        <ns0:EmailFrom>[email protected]</ns0:EmailFrom>
        <ns0:EmailRecipientToList>[email protected]</ns0:EmailRecipientToList>
        <ns0:EmailRecipientCCList>[email protected]</ns0:EmailRecipientCCList>
        <ns0:EmailMessageBodyForPermissionRequest>This message was sent to request permission to start the Data Push process.  The SQL Job to be run is WeeklyProcessing_DataPush</ns0:EmailMessageBodyForPermissionRequest>
        <ns0:SQLJobName>WeeklyProcessing_DataPush</ns0:SQLJobName>
        <ns0:SQLJobStepName>Push_To_Production</ns0:SQLJobStepName>
        <ns0:SQLJobMinToWait>1</ns0:SQLJobMinToWait>
        <ns0:PermissionRequestTriggerPath>\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\</ns0:PermissionRequestTriggerPath>
        <ns0:PermissionRequestApprovedPath>\\server\ETL-BizTalk\Automation\Approved\</ns0:PermissionRequestApprovedPath>
        <ns0:PermissionRequestNotApprovedPath>\\server\ETL-BizTalk\Automation\NotApproved\</ns0:PermissionRequestNotApprovedPath>
      </ns0:MsgSchedulerTriggerSQLJobReceive>
    Every node of this schema was promoted to a distinguished field so that the values can be used for decision making in the orchestration. The first decision made is on the "IsPermissionRequired" field. If permission is required (IsPermissionRequired == "YES"), BizTalk will use the configuration info in the XML trigger to format the email message. Here is the snippet of how the email message is constructed:
      SQLJobEmailMessage.EmailBody = new Eai.OrchestrationHelpers.XlangCustomFormatters.RawString(
          MsgSchedulerTriggerSQLJobReceive.EmailMessageBodyForPermissionRequest +
          "<br><br>" +
          "By moving the file, you are either giving permission to the process, or disapprove of the process." +
          "<br>" +
          "This is the file to move: \"" + PermissionTriggerToBeGenereatedHere +
          "\"<br>" +
          "(You may find it easier to open the destination folder first, then navigate to the sibling folder to get to this file)" +
          "<br><br>" +
          "To approve, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestApprovedPath +
          "<br><br>" +
          "To disapprove, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestNotApprovedPath +
          "<br><br>" +
          "The file will be IMMEDIATELY picked up by the automated process.  This is normal.  You should receive a message soon that the file is processed." +
          "<br>" +
          "Thank you!"
      );
      SQLJobSendNotification(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + MsgSchedulerTriggerSQLJobReceive.EmailRecipientToList;
      SQLJobEmailMessage.EmailBody(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html";
      SQLJobEmailMessage(SMTP.Subject) = "Requesting Permission to Start the " + MsgSchedulerTriggerSQLJobReceive.BusinessProcessName;
      SQLJobEmailMessage(SMTP.From) = MsgSchedulerTriggerSQLJobReceive.EmailFrom;
      SQLJobEmailMessage(SMTP.CC) = MsgSchedulerTriggerSQLJobReceive.EmailRecipientCCList;
      SQLJobEmailMessage(SMTP.EmailBodyFileCharset) = "UTF-8";
      SQLJobEmailMessage(SMTP.SMTPHost) = "localhost";
      SQLJobEmailMessage(SMTP.MessagePartsAttachments) = 2;
    After the permission request email is sent, the next step is to generate the actual Permission Trigger file. A correlation set is used here on SQLJobName and a newly generated GUID field:
      <?xml version="1.0" encoding="utf-8"?>
      <ns0:SQLJobAuthorizationTrigger xmlns:ns0="somethingsomething">
        <SQLJobName>Data Push</SQLJobName>
        <CorrelationGuid>9f7c6b46-0e62-46a7-b3a0-b5327ab03753</CorrelationGuid>
      </ns0:SQLJobAuthorizationTrigger>
    The end user (the human intervention piece) will either grant permission for this process, or deny it, by moving the Permission Trigger file to either the "Approved" folder or the "NotApproved" folder. A parallel Listen shape is waiting for either response.
    The next set of steps decides how the SQL Job is to be called, or whether it is called at all. If permission is denied, it simply sends out a notification. If permission is granted, then the flag (IsProcessAsync) in the original Process Trigger is used. The synchronous part is not really synchronous, but a loop timer that checks the status within the calling stored procedure (for more information, check out my previous post: http://geekswithblogs.net/LifeLongTechie/archive/2010/11/01/execute-sql-job-synchronously-for-biztalk-via-a-stored-procedure.aspx). If it's async, then the sp starts the job and BizTalk sends out an email. And of course, some error notification.
    Footnote: The next version of this orchestration will have an additional parallel line near the Listen shape, with a Delay built in and a Loop to send out a daily reminder if no response has been received from the end user. The synchronous part is used to gather results and execute a data clean-up process so that the SQL Job can be re-tried. There are many possibilities here.

    Read the article

  • Microsoft Access 2010: How to Format Reports

    While Access 2010 provides a multitude of functionality, it is its easy-to-use nature that is perhaps even more impressive. It comes with an intuitive interface that allows you to take full control after playing around with the program for a bit and becoming acquainted with its features. Still, you may be completely new to the program and looking for some guidance on how to execute certain tasks. That is what this tutorial intends to do, as we look at a few different options you have when it comes to formatting reports. So, before we jump into formatting a report, let's discuss some of...

    Read the article

  • How to use a zenity progress bar with cclive

    - by Winael
    I tried to use a zenity progress bar with cclive. I'm writing a script to download web video files and I want to see the progression of the download. I tried something like:
      $ cclive <url> 2>&1 | zenity --progress
    but when I execute the command it does not seem to work. Any idea how I can do that? BR,
    [Edit] cclive has this kind of output:
      cclive http://www.youtube.com/watch?v=youtubevideo
      Checking ... .......... ..........done.
      youtubevideo.flv  2.5M  75.8K/s  00:09:29  5%
    So I need to send the last part to stdout but I don't know how. Also, about --pulsate: we can't see the progression with this option, and I really need it... So I will not use --pulsate for this script.
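
    A sketch of one way this could be wired up (an assumption based on the sample output above, not a tested recipe): turn cclive's carriage-return progress updates into newlines, extract the percentage, and feed the bare numbers to zenity, which moves its bar for every number it reads on stdin:

      #!/bin/bash
      # pipe cclive's progress percentage into a zenity progress bar
      url="$1"
      cclive "$url" 2>&1 \
        | tr '\r' '\n' \
        | sed -un 's/.* \([0-9]\{1,3\}\)%.*/\1/p' \
        | zenity --progress --title="Downloading" --auto-close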

    Read the article
