Search Results

Search found 22668 results on 907 pages for 'command prompt'.


  • reading newlines with FORMAT statement

    - by peter.murray.rust
    I'm writing a preprocessor and postprocessor for Fortran input and output using FORMAT-like statements (there are reasons not to use a Fortran library). I want to treat the newline ("/") edit descriptor correctly. I don't have a Fortran compiler immediately to hand. Is there a simple algorithm for working out how many newlines are written or consumed? (This post only covers reading examples.) [Please assume a FORTRAN77-like mentality in the Fortran code, and correct any Fortran syntax errors on my part.]

    UPDATE: no comments yet, so I am reduced to finding a compiler and running it myself. I'll post the answers if I'm not beaten to it. No one commented that I had the format syntax wrong; I've changed it, but there may still be errors.

    Assume datafile 1 contains:

        a
        b
        c
        d
        etc...

    (a) Does the READ command always consume a newline? Does

        READ(1,'(A)') A
        READ(1,'(A)') B

    give A='a' and B='b'?

    (b) What does

        READ(1,'(A,/)') A
        READ(1,'(A)') B

    give for B? (I would assume 'c'.)

    (c) What does

        READ(1,'(/)')
        READ(1,'(A)') A

    give for A? (Is it 'b' or 'c'?)

    (d) What does

        READ(1,'(A,/,A)') A, B
        READ(1,'(A)') C

    give for A, B and C? (Can I assume 'a', 'b' and 'c'?)

    (e) What does

        READ(1,'(A,/,/,A)') A, B
        READ(1,'(A)') C

    give for A, B and C? (Can I assume 'a', 'c' and 'd'?)

    Are there any cases in which the '/' is redundant?
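    Since the question is empirical, a tiny throwaway harness settles cases (a) and (b) in one run; a minimal sketch, assuming free-form Fortran, any modern compiler (e.g. gfortran), and the four-line data file above:

        PROGRAM fmttest
          CHARACTER(LEN=8) :: a, b
          OPEN(1, FILE='data.txt', STATUS='OLD')
          ! (a) two plain reads: each READ statement starts on a new record
          READ(1,'(A)') a
          READ(1,'(A)') b
          PRINT *, 'case a: ', a, b
          ! (b) the trailing / should skip one extra record before the next READ
          REWIND(1)
          READ(1,'(A,/)') a
          READ(1,'(A)') b
          PRINT *, 'case b: ', a, b
          CLOSE(1)
        END PROGRAM fmttest

    The remaining cases follow the same REWIND-and-read pattern.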

    Read the article

  • What Test Environment Setup do Top Project Committers Use in the Ruby Community?

    - by viatropos
    Today I am going to get as far as I can setting up my testing environment and workflow. I'm looking for practical advice on how to set up the test environment from those of you who are passionate and well versed in Ruby testing. By the end of the day (6am PST?) I would like to be able to:

    - Type a single command to run the test suite for ANY project I find on GitHub.
    - Run autotest for ANY GitHub project so I can fork and make TESTABLE contributions.
    - Build gems from the ground up with autotest and Shoulda.

    For one reason or another, I hardly ever run tests for projects I clone from GitHub. The major reason is that unless they're using RSpec and have a Rake task to run the tests, I don't see the common pattern behind it all. I have built 3 or 4 gems writing tests with RSpec, and while I find the DSL fun, it's less than ideal because it adds another layer/language of methods I have to learn and remember. So I'm going with Shoulda. But this isn't a question about which testing framework to choose. So the questions are:

    - What is your (the SO reader and GitHub project committer) test environment setup using autotest, so that whenever you git clone a gem you can run its tests and autotest-develop it if desired? (A sketch of the conventional Rake setup follows below.)
    - What are the people writing the Paperclip tests and Authlogic tests doing? What is their setup?

    Thanks for the insight. Looking for answers that will make me a more effective tester.
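    For reference, the near-universal convention such projects rely on is a Rake::TestTask wired to the default task, so a bare `rake` runs the suite in any fresh checkout; a minimal sketch (file layout assumed to follow the usual test/ convention):

        # Rakefile
        require 'rake/testtask'

        Rake::TestTask.new(:test) do |t|
          t.libs << 'lib' << 'test'         # put lib/ and test/ on the load path
          t.pattern = 'test/**/*_test.rb'   # conventional test file layout
        end

        task :default => :test              # plain `rake` now runs the tests

    With that in place, autotest generally works out of the box from the project root, since it discovers tests by the same naming convention.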

    Read the article

  • Eliminate full table scan due to BETWEEN (and GROUP BY)

    - by Dave Jarvis
    Description

    According to the explain command, there is a range that is causing a query to perform a full table scan (160k rows). How do I keep the range condition and reduce the scanning? I expect the culprit to be:

        Y.YEAR BETWEEN 1900 AND 2009 AND

    Code

    Here is the code that has the range condition (the STATION_DISTRICT is likely superfluous).

        SELECT
          COUNT(1) as MEASUREMENTS,
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C,
          STATION S,
          STATION_DISTRICT SD,
          YEAR_REF Y FORCE INDEX(YEAR_IDX),
          MONTH_REF M,
          DAILY D
        WHERE
          -- For a specific city ...
          C.ID = 10663 AND
          -- Find all the stations within a specific unit radius ...
          6371.009 * SQRT(
            POW(RADIANS(C.LATITUDE_DECIMAL - S.LATITUDE_DECIMAL), 2) +
            (COS(RADIANS(C.LATITUDE_DECIMAL + S.LATITUDE_DECIMAL) / 2) *
             POW(RADIANS(C.LONGITUDE_DECIMAL - S.LONGITUDE_DECIMAL), 2))
          ) <= 50 AND
          -- Get the station district identification for the matching station.
          S.STATION_DISTRICT_ID = SD.ID AND
          -- Gather all known years for that station ...
          Y.STATION_DISTRICT_ID = SD.ID AND
          -- The data before 1900 is shaky; insufficient after 2009.
          Y.YEAR BETWEEN 1900 AND 2009 AND
          -- Filtered by all known months ...
          M.YEAR_REF_ID = Y.ID AND
          -- Whittled down by category ...
          M.CATEGORY_ID = '003' AND
          -- Into the valid daily climate data.
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY
          Y.YEAR

    Update

    The SQL is performing a full table scan, which results in MySQL performing a "copy to tmp table", as shown here:

        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
        | id | select_type | table | type   | possible_keys                     | key          | key_len | ref                           | rows   | Extra       |
        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
        |  1 | SIMPLE      | C     | const  | PRIMARY                           | PRIMARY      | 4       | const                         |      1 |             |
        |  1 | SIMPLE      | Y     | range  | YEAR_IDX                          | YEAR_IDX     | 4       | NULL                          | 160422 | Using where |
        |  1 | SIMPLE      | SD    | eq_ref | PRIMARY                           | PRIMARY      | 4       | climate.Y.STATION_DISTRICT_ID |      1 | Using index |
        |  1 | SIMPLE      | S     | eq_ref | PRIMARY                           | PRIMARY      | 4       | climate.SD.ID                 |      1 | Using where |
        |  1 | SIMPLE      | M     | ref    | PRIMARY,YEAR_REF_IDX,CATEGORY_IDX | YEAR_REF_IDX | 8       | climate.Y.ID                  |     54 | Using where |
        |  1 | SIMPLE      | D     | ref    | INDEX                             | INDEX        | 8       | climate.M.ID                  |     11 | Using where |
        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+

    Related

        http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html
        http://dev.mysql.com/doc/refman/5.0/en/where-optimizations.html
        http://stackoverflow.com/questions/557425/optimize-sql-that-uses-between-clause

    Thank you!
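    One hedged avenue (assuming YEAR_REF has no such index already): every qualifying YEAR_REF row is also joined on STATION_DISTRICT_ID, so a composite index that leads with the join column lets MySQL seek to one district and range-scan only that district's years, instead of ranging over all 160k rows:

        CREATE INDEX YEAR_DISTRICT_IDX ON YEAR_REF (STATION_DISTRICT_ID, YEAR);

    Dropping the FORCE INDEX(YEAR_IDX) hint would then leave the optimizer free to pick it.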

    Read the article

  • MySQL database is indexed in Apache Solr; how do I access it via URL?

    - by Wasim
    data-config.xml:

        <dataConfig>
          <dataSource encoding="UTF-8" type="JdbcDataSource"
                      driver="com.mysql.jdbc.Driver"
                      url="jdbc:mysql://localhost:3306/somevisits"
                      user="root" password=""/>
          <document name="somevisits">
            <entity name="login" query="select * from login">
              <field column="sv_id" name="sv_id" />
              <field column="sv_username" name="sv_username" />
            </entity>
          </document>
        </dataConfig>

    schema.xml:

        <?xml version="1.0" encoding="UTF-8" ?>
        <schema name="example" version="1.5">
          <fields>
            <field name="sv_id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
            <field name="username" type="string" indexed="true" stored="true" required="true"/>
            <field name="_version_" type="long" indexed="true" stored="true" multiValued="false"/>
            <field name="text" type="string" indexed="true" stored="false" multiValued="true"/>
          </fields>
          <uniqueKey>sv_id</uniqueKey>
          <types>
            <fieldType name="string" class="solr.StrField" sortMissingLast="true" />
            <fieldType name="long" class="solr.TrieLongField" precisionStep="0" positionIncrementGap="0"/>
          </types>
        </schema>

    Solr successfully imported the MySQL database using a full import:

        http://[localSolr]:8983/solr/#/collection1/dataimport?command=full-import

    My question is: how do I access that imported MySQL data now?
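    Once the import finishes, the rows live as documents in the Solr index and are queried through a search handler over HTTP rather than through SQL. A typical request against the stock /select handler would look like this (host and field value assumed; q takes field:value Lucene syntax, q=*:* returns everything, wt picks the response format):

        http://[localSolr]:8983/solr/collection1/select?q=sv_username:wasim&wt=json&indent=true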

    Read the article

  • Finding missing files by checksum

    - by grw
    Hi there, I'm doing a large data migration between two file systems (let's call them F1 and F2) on a Linux system. The migration necessarily involves copying the data verbatim into a differently-structured hierarchy on F2 and changing the file names. I'd like to write a script to generate a list of files which are in F1 but not in F2, i.e. the ones which weren't copied by the migration script into the new hierarchy, so that I can go back and migrate them manually. Unfortunately, for reasons not worth going into, the migration script can't be modified to list the files it doesn't migrate. My question differs from this previously answered one because I cannot rely on filenames as a comparison. I know the basic outline of the process would be:

    1. Generate a list of checksums for all files, recursing through F1.
    2. Do the same for F2.
    3. Compare the lists and generate a negative intersection of the checksums, ignoring the file names, to find files which are in F1 but not in F2.

    I'm kind of stuck getting past that stage, so I'd appreciate any pointers on which tools to use. I think I need to use the comm command to compare the lists of file checksums, but since md5sum, sha512sum and the like put the file name next to the checksum, I can't see a way to get a useful comparison out of them. Maybe awk is the way to go? I'm using Red Hat Enterprise Linux 5.x. Thanks.
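    A minimal sketch of that pipeline, assuming mount points /f1 and /f2 and ignoring the (tiny) chance of MD5 collisions: strip the checksum column out with awk, let comm -23 emit checksums present only in F1, then map those back to file names:

        find /f1 -type f -exec md5sum {} + | sort > f1.sums
        find /f2 -type f -exec md5sum {} + | sort > f2.sums

        # Checksum column only, de-duplicated, for the set comparison
        awk '{print $1}' f1.sums | sort -u > f1.keys
        awk '{print $1}' f2.sums | sort -u > f2.keys

        # Checksums present only in F1, mapped back to F1 file names
        comm -23 f1.keys f2.keys | grep -F -f - f1.sums

    Every tool used here ships with RHEL 5.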

    Read the article

  • PHP DELETE immediately after select

    - by teehoo
    I have a PHP server script that SELECTs some data from a MySQL database. As soon as I have the result from mysql_query and mysql_fetch_assoc stored in my own local variables, I want to delete the row I just selected. The problem with this approach is that it seems PHP has done pass-by-reference to my local variables instead of pass-by-value, and my local variables become undefined after the delete command. Is there any way to get around this? Here is my code:

        $query = "SELECT id, peerID, name FROM names WHERE peer = $userID AND docID = '$docID' AND seqNo = $nid";
        $result = mysql_query($query);
        if (!$result)
            self::logError("FAIL:1 getUsersNamesUpdate() query: ".$query."\n");
        if (mysql_num_rows($result) == 0)
            return array();
        $row = mysql_fetch_assoc($result);
        $result = array();
        $result["id"] = $row["id"];
        $result["peerID"] = $row["peerID"];
        $result["name"] = $row["name"];
        $query = "DELETE FROM names WHERE id = $result[id];";
        $result = mysql_query($query);
        if (!$result)
            self::logError("FAIL:2 getUsersNamesUpdate() query: ".$query."\n");
        return $result;
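    For what it's worth, a hedged reading of the snippet: the symptom looks less like pass-by-reference than like $result being reused for three different things (the result resource, the output array, then the DELETE's boolean return value), so the function ends up returning mysql_query's true/false instead of the data. A sketch with distinct names (variable names are placeholders):

        $res = mysql_query("SELECT id, peerID, name FROM names WHERE ...");
        $row = mysql_fetch_assoc($res);

        $out = array(
            'id'     => $row['id'],
            'peerID' => $row['peerID'],
            'name'   => $row['name'],
        );

        // The DELETE's return value no longer clobbers the data being returned
        mysql_query("DELETE FROM names WHERE id = " . intval($out['id']));
        return $out;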

    Read the article

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night to update the publication snapshot. About six months ago we upgraded from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the daily snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0: we were running the snapshot agent before, and the conflicts would accumulate. Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is simply to stop automatically updating the publication snapshot. Is that a reasonable workaround? Here is the line from the script that adds the snapshot:

        exec sp_addpublication_snapshot
            @publication = N'MyPub'
          , @frequency_type = 4
          , @frequency_interval = 1
          , @frequency_relative_interval = 1
          , @frequency_recurrence_factor = 0
          , @frequency_subday = 1
          , @frequency_subday_interval = 5
          , @active_start_date = 0
          , @active_end_date = 0
          , @active_start_time_of_day = 500
          , @active_end_time_of_day = 235959

    Here is the step that runs in the agent job:

        Step Name: Run agent.
        Type: Replication Snapshot
        Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02]
                 -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1

    This appears to be running the Replication Snapshot Agent utility. There is no mention on that link of dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.
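    If the workaround is simply to stop the nightly regeneration, one hedged option is to disable the snapshot agent's SQL Server Agent job rather than delete it, so it can still be run by hand when the publication actually changes (the job name below is a placeholder; run against msdb on the server hosting the agent):

        EXEC msdb.dbo.sp_update_job
            @job_name = N'WCDBS02-TaxDB-TaxDB-1'  -- placeholder: actual snapshot agent job name
          , @enabled  = 0;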

    Read the article

  • Can't get Java program to run! NoClassDefFoundError?

    - by mcintyre321
    I'm a .NET developer, but for my current project I need to use Google Caja, a Java project. Uh-oh! I've followed the guide at http://code.google.com/p/google-caja/wiki/RunningCaja on my Windows machine, but can't get the program to run. The command line they suggest didn't work, so I cd'd into the ant-jars directory and tried to run pluginc.jar:

        D:\java\caja\svn-changes\pristine\ant-jars>java -cp . -jar pluginc.jar -i test.htm
        Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/cli/ParseException
                at com.google.caja.plugin.PluginCompilerMain.<init>(PluginCompilerMain.java:78)
                at com.google.caja.plugin.PluginCompilerMain.main(PluginCompilerMain.java:368)
        Caused by: java.lang.ClassNotFoundException: org.apache.commons.cli.ParseException
                at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
                at java.security.AccessController.doPrivileged(Native Method)
                at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
                at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
                at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
                ... 2 more

    What's that all about? I've also tried file:///d:/java/caja/svn-changes/pristine/ant-jars/test.htm instead of test.htm, as looking at the source it seems the file param is a URI... I've also tried running IKVM on pluginc and then not worrying about Java, but that came up with the same ClassNotFoundException too... thanks!
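    Two details worth knowing here, with a hedged sketch: java silently ignores -cp whenever -jar is given (the classpath then comes solely from the jar's own manifest), and the missing class belongs to Apache Commons CLI. So one workaround is to skip -jar and name both jars on the classpath explicitly (the commons-cli jar file name and location are assumptions):

        java -cp pluginc.jar;commons-cli-1.2.jar com.google.caja.plugin.PluginCompilerMain -i test.htm

    (; is the Windows classpath separator; use : on Unix.) The main class name is taken from the stack trace above.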

    Read the article

  • Auto Submitting a form (cURL)

    - by cast01
    I'm writing a form to post data off to PayPal, and this works fine: I create the form with hidden fields and have a submit button that submits everything to PayPal. However, when the user clicks that button, there is more that I want to do, for example change their cart status in the database. So I want to be able to execute some code when they click submit and then post the data to PayPal. I don't want to use JavaScript for this. My method at the moment uses cURL: the form posts back to the current page, I then check for $_POST data, do my other commands like updating the status of the cart, and then create a cURL request and post the form data to PayPal. The post succeeds, but the browser doesn't go off to PayPal. At first I was just retrieving the result into a string and echoing it out, and I could see that the post was successful; then I set CURLOPT_FOLLOWLOCATION to 1, assuming this would let the browser go off to PayPal, but it doesn't. What it seems to do is grab some content from PayPal and put it on my page. Is cURL the right tool for this? If so, how do I get around this problem? I want to stay away from JavaScript because only users with JavaScript enabled would have their cart updated. The code I'm using for cURL is below; post_data is an array I created, and I read its key-value pairs into post_string.

        //Set cURL options
        curl_setopt($curl_connection, CURLOPT_CONNECTTIMEOUT, 30);
        curl_setopt($curl_connection, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)");
        curl_setopt($curl_connection, CURLOPT_RETURNTRANSFER, false);
        curl_setopt($curl_connection, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($curl_connection, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($curl_connection, CURLOPT_MAXREDIRS, 20);

        //Set data to be posted
        curl_setopt($curl_connection, CURLOPT_POSTFIELDS, $post_string);

        //Execute request
        curl_exec($curl_connection);
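    A hedged note on the underlying model: cURL runs server-to-server, so nothing it does can navigate the visitor's browser. The usual JavaScript-free pattern is to let the form post back, do the database work, and only then send the browser onward yourself; a sketch (the helper name and PayPal parameters are placeholders, and the redirect variant assumes a PayPal flow that accepts GET, such as classic _xclick links):

        if (!empty($_POST)) {
            update_cart_status($_POST['cart_id']);   // placeholder helper

            // Hand the *browser* to PayPal; server-side cURL can never do this
            header('Location: https://www.paypal.com/cgi-bin/webscr?cmd=_xclick&business=...');
            exit;
        }

    If the gateway requires a POST, the alternative is to echo a minimal confirmation page containing the hidden-field form aimed at PayPal, with a single submit button.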

    Read the article

  • Undesired Output of Crontab Job Using CURL

    - by Russell C.
    I have written a Perl script that runs as a daily crontab job and uploads files to Amazon S3 via cURL. I want the output of the cron job emailed to me, which works fine, but I don't want that email to include messages related to the cURL upload (only those messages my script is outputting). Here are the cURL-related messages I'm seeing in the daily email right now:

          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
          0  230M    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
          0  230M    0     0    0   544k      0  1519k  0:02:35 --:--:--  0:02:35 1807k
          0  230M    0     0    0  1744k      0  1286k  0:03:03  0:00:01  0:03:02 1342k
          1  230M    0     0    1  2880k      0  1219k  0:03:13  0:00:02  0:03:11 1250k
          1  230M    0     0    1  4016k      0  1198k  0:03:17  0:00:03  0:03:14 1218k
          2  230M    0     0    2  5168k      0  1186k  0:03:19  0:00:04  0:03:15 1202k
          2  230M    0     0    2  6336k      0  1181k  0:03:19  0:00:05  0:03:14 1157k
          3  230M    0     0    3  7488k      0  1177k  0:03:20  0:00:06  0:03:14 1147k
          3  230M    0     0    3  8592k      0  1167k  0:03:22  0:00:07  0:03:15 1142k
          4  230M    0     0    4  9744k      0  1166k  0:03:22  0:00:08  0:03:14 1145k
          4  230M    0     0    4  10.6M      0  1163k  0:03:23  0:00:09  0:03:14 1142k
          5  230M    0     0    5  11.7M      0  1161k  0:03:23  0:00:10  0:03:13 1140k
          5  230M    0     0    5  12.8M      0  1158k  0:03:23  0:00:11  0:03:12 1133k
          6  230M    0     0    6  13.9M      0  1155k  0:03:24  0:00:12  0:03:12 1138k
          6  230M    0     0    6  15.0M      0  1155k  0:03:24  0:00:13  0:03:11 1138k
          7  230M    0     0    7  16.1M      0  1152k  0:03:25  0:00:14  0:03:11 1131k
          7  230M    0     0    7  17.2M      0  1152k  0:03:25  0:00:15  0:03:10 1132k
          7  230M    0     0    7  18.4M      0  1152k  0:03:24  0:00:16  0:03:08 1140k

    I am using a simple Perl system() call to invoke cURL. Does anyone know what command-line argument I can supply cURL to turn off the reporting of the upload progress? Thanks in advance for your help!
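    The flag asked about does exist: -s / --silent suppresses the progress meter, and pairing it with -S / --show-error keeps genuine errors on stderr so the cron mail still reports failures. A hedged sketch of the Perl invocation (the file variable and S3 URL are placeholders):

        # List form avoids shell quoting issues; --upload-file is curl's -T
        system('curl', '--silent', '--show-error',
               '--upload-file', $local_file,
               "https://s3.amazonaws.com/$bucket/$key");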

    Read the article

  • Passing an outside variable into an <asp:sqldatasource> tag (ASP.NET 2.0)

    - by MadMAxJr
    I'm writing some VB-based ASP.NET 2.0, and I am trying to make more use of the various ASP tags that Visual Studio provides, rather than hand-writing everything in the code-behind. I want to pass in an outside variable from the Session to identify who the user is for the query.

        <asp:sqldatasource id="DataStores" runat="server"
            connectionstring="<%$ ConnectionStrings:MY_CONNECTION %>"
            providername="<%$ ConnectionStrings:MY_CONNECTION.ProviderName %>"
            selectcommand="SELECT THING1, THING2 FROM DATA_TABLE
                           WHERE (THING2 IN (SELECT THING2 FROM RELATED_DATA_TABLE
                                             WHERE (USERNAME = @user)))"
            onselecting="Data_Stores_Selecting">
            <SelectParameters>
                <asp:parameter name="user" defaultvalue="" />
            </SelectParameters>
        </asp:sqldatasource>

    And in my code-behind I have:

        Protected Sub Data_Stores_Selecting(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.SqlDataSourceSelectingEventArgs) Handles Data_Stores.Selecting
            e.Command.Parameters("user").Value = Session("userid")
        End Sub

    Oracle squawks at me with ORA-01036, illegal variable name. Am I declaring the variable wrong in the query? I thought external variables share the same name with a @ prefixed. From what I understand, this should be placing the value I want into the query when it executes the select.

    EDIT: Okay, thanks for the advice so far. The first error is corrected: I need to use : and not @ for the variable declaration in an Oracle query. Now it generates an ORA-01745, invalid host/bind variable name.

    EDIT AGAIN: Okay, looks like "user" was a reserved word. It works now! Thanks for other points of view on this one. I hadn't thought of that approach.
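    Pulling the two EDITs together, the working shape is presumably something like this (the parameter was renamed to dodge Oracle's reserved word USER; the new name is arbitrary):

        selectcommand="SELECT THING1, THING2 FROM DATA_TABLE
                       WHERE (THING2 IN (SELECT THING2 FROM RELATED_DATA_TABLE
                                         WHERE (USERNAME = :appuser)))"
        ...
        <asp:parameter name="appuser" defaultvalue="" />

    with the code-behind assignment updated to match:

        e.Command.Parameters("appuser").Value = Session("userid")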

    Read the article

  • IIS to SQL Server Kerberos auth issues

    - by crosan
    We have a 3rd-party product that allows some of our users to manipulate data in a database (on what we'll call SvrSQL) via a website on a separate server (SvrWeb). On SvrWeb we have a specific, non-default website set up for this application, so instead of going to http://SvrWeb.company.com to reach the website we use http://application.company.com, which resolves to SvrWeb, and the host headers route to the correct site. There is also a specific application pool set up for this site whose identity is an Active Directory account we'll call company\SrvWeb_iis. We're set up to allow delegation on this account and to allow it to impersonate another login (we want this account to pass along the AD credentials of the person signed into the website to SQL Server, rather than using a service account). We also set up the SPN for the SrvWeb_iis account via the following command:

        setspn -A HTTP/SrvWeb.company.com SrvWeb_iis

    The website pulls up, but the section of the website that makes the call to the database returns the message:

        Cannot execute database query. Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    I thought we had the SPN information set up correctly, but when I check the security event log on SrvWeb I see entries for my logins, and they seem to be using NTLM rather than Kerberos:

        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM

    Any ideas or articles that cover this setup in detail would be extremely appreciated! If it helps, we are using SQL Server 2005, and both the web and SQL servers are Windows 2003.
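    One hedged line of investigation: Kerberos is only negotiated when an SPN exists for the exact host name in the URL the browser uses, plus one for the SQL Server service, so with users browsing to application.company.com the registrations would presumably need to look more like this (the SQL service account name is a placeholder):

        setspn -A HTTP/application.company.com company\SrvWeb_iis
        setspn -A MSSQLSvc/SvrSQL.company.com:1433 company\SqlSvcAccount

    With both SPNs in place, delegation from SrvWeb_iis to the MSSQLSvc service can then be granted in Active Directory; a missing or mismatched SPN silently falls back to NTLM, which cannot forward credentials and produces exactly the ANONYMOUS LOGON failure shown.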

    Read the article

  • How to handle a build rule with unknown targets in OMake when the target-list generator is itself built

    - by Michael E
    I have a project which uses OMake for its build system, and I am trying to handle a rather tough corner case. I have some definition files, and a tool which can take a definition file and create GraphViz files from it. There are two problems, though:

    1. Each definition file can produce multiple graphs, and the list of graphs it can produce is encoded in the file itself. My dump tool does have a -list option which lists all the graphs a definition file will produce.
    2. The dump tool is built in the source tree.

    I want this list available in the OMakefile so I can use other rules to convert the DOT files to SVG, and have a phony target depend on all the SVGs (goal: a single build command which builds SVG descriptions of all my graphs). If I only had the first problem, it would be easy: I would run the tool to build the list, and then use that list to build a target which invokes the dumper to output the GraphViz files. However, I am rather stuck on forcing the dump tool to be built before it is needed. If this were make, I would just run make recursively to build the dump tool. OMake does not allow recursive invocation, however, and the build function is only usable from osh. Any suggestions for a good solution to this problem?

    Read the article

  • Question about using an Access database as a resource file in Visual Studio

    - by user354303
    Hi, I am trying to embed a Microsoft Access database file into my class assembly DLL. I want my code to reference the resource file and use it with an ADODB.Connection object. Does anybody know a simpler or easier way, or what is wrong with my code? When I added the resource file it generated dataset definitions for me, but I have no idea what to do with those. The connection string I am trying below is from an automatically generated app.config. I did add the item as a resource...

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Data;
        using ConsoleApplication1.Resources; //SPPrinterLicenses
        using System.Data.OleDb;
        using ADODB;
        using System.Configuration;

        namespace ConsoleApplication1
        {
            class SharePointPrinterManager
            {
                public static bool IsValidLicense(string HardwareID)
                {
                    OleDbDataAdapter da = new OleDbDataAdapter();
                    DataSet ds = new DataSet();
                    ADODB.Connection adoCn = new Connection();
                    ADODB.Recordset adoRs = new Recordset();

                    // **The Open call below fails**
                    adoCn.Open(
                        @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=|DataDirectory|\Resources\SPPrinterLicenses.accdb;Persist Security Info=True",
                        "", "", 1);
                    adoRs.Open("Select * from AllWorkstationLicenses", adoCn,
                        ADODB.CursorTypeEnum.adOpenForwardOnly,
                        ADODB.LockTypeEnum.adLockReadOnly, 1);
                    da.Fill(ds, adoRs, "AllworkstationLicenses");
                    adoCn.Close();

                    DataTable dt = new DataTable();
                    //ds.Tables.
                    return true;
                }
            }
        }
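    A hedged alternative worth sketching: an embedded resource is a byte stream inside the DLL, not a file on disk, so a file-path connection string can't reach it. A common workaround is to write the resource out once and connect to that copy (the generated Properties.Resources.SPPrinterLicenses byte[] property and a System.IO using are assumptions):

        // Materialize the embedded .accdb, then open it with plain ADO.NET
        string dbPath = Path.Combine(Path.GetTempPath(), "SPPrinterLicenses.accdb");
        if (!File.Exists(dbPath))
            File.WriteAllBytes(dbPath, Properties.Resources.SPPrinterLicenses);

        using (var cn = new OleDbConnection(
            "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + dbPath))
        using (var da = new OleDbDataAdapter("SELECT * FROM AllWorkstationLicenses", cn))
        {
            var ds = new DataSet();
            da.Fill(ds, "AllWorkstationLicenses");   // no ADODB interop needed
        }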

    Read the article

  • What are these stray zero-byte files extracted from tarball? (OSX)

    - by Scott M
    I'm extracting a folder from a tarball, and I see zero-byte files showing up in the result that are not in the source. Setup (all on OS X): on machine one, I have a directory /My/Stuff/Goes/Here/ containing several hundred files. I build the archive like this:

        tar -cZf mystuff.tgz /My/Stuff/Goes/Here/

    On machine two, I scp the tgz file to my local directory, then unpack it:

        tar -xZf mystuff.tgz

    It creates ~scott/My/Stuff/Goes/, but then under Goes I see two entries:

        Here/    - a directory
        Here.bGd - a zero-byte file

    The Here.bGd zero-byte file has a random 3-character suffix, mixed upper- and lower-case characters. It has the same name as the lowest-level directory mentioned in the tar-creation command, and it only appears at that lowest level. Does anybody know where these come from, and how I can adjust my tar invocation to get rid of them?

    Update: I checked the table of contents on the files using tar tZvf: the TOC does not list the zero-byte files, so I'm leaning toward the suggestion that the uncompressing machine is at fault. OS X is version 10.5.5 on the unzip machine (not sure how to check the filesystem type). tar is GNU tar 1.15.1, and it came with the machine.
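    A quick way to confirm the archive itself is clean is to diff its table of contents against what actually lands on disk (paths assumed; the sed strips tar's trailing slashes on directory entries so the two lists compare cleanly):

        tar tZf mystuff.tgz | sed 's|/$||' | sort > archive.lst
        (cd ~scott && find My/Stuff/Goes -print | sort > disk.lst)
        diff archive.lst disk.lst    # lines marked '>' exist only on disk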

    Read the article

  • Chrome extension JavaScript array bug?

    - by Wayne Werner
    Hi, I'm working on a Google Chrome extension. In the popup I have the following code:

        var bookmarks = [];

        function appendBMTnode(node) {
            bookmarks.push([node[0].title, node[0].id]);
        }

        function addchildren(results) {
            for (x = 0; x < results.length; x++) {
                bookmarks.push([results[x].title, results[x].id]);
                chrome.bookmarks.getChildren(results[x].id, addchildren);
            }
        }

        function getallbookmarks() {
            chrome.bookmarks.get('0', appendBMTnode);
            chrome.bookmarks.getChildren('0', addchildren);
        }

        console.debug(bookmarks.length);
        console.debug(bookmarks);

    Now, I would assume that the first command would output the number of bookmarks I have. Indeed, when I use Chrome's debugger and add bookmarks.length to the watch list, the value is 418. In the console of the debugger I can write bookmarks.length and it will give me the correct length. I can type

        for (x = 0; x < bookmarks.length; x++) {
            console.debug(bookmarks[x]);
        }

    and I get string representations of each inner array. However, the original console.debug(bookmarks.length) gives an output of zero, and if I add console.debug(bookmarks[0]); to popup.html it tells me that the value is undefined. This seems like a bug to me, but my real question is: how can I iterate over this list? Thanks
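    A hedged explanation, rather than a bug: the chrome.bookmarks calls are asynchronous, so the top-level console.debug runs before any callback has pushed anything; by the time the debugger's watch window looks, the callbacks have fired and the array holds its 418 entries. The iteration therefore has to happen inside the callback chain, which is simplest with the single-shot getTree:

        chrome.bookmarks.getTree(function (roots) {
            var bookmarks = [];
            (function walk(nodes) {                // flatten the whole tree recursively
                for (var i = 0; i < nodes.length; i++) {
                    bookmarks.push([nodes[i].title, nodes[i].id]);
                    if (nodes[i].children) walk(nodes[i].children);
                }
            })(roots);
            console.debug(bookmarks.length);       // correct count: runs after the API returns
        });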

    Read the article

  • Visual Studio 2012 won't start

    - by David Aleu
    I installed VS2012 Premium from our MSDN subscription and it was working fine for the first couple of days, but after I installed a few extensions VS2012 will no longer start, giving the error:

        Faulting application name: devenv.exe, version: 11.0.50727.1, time stamp: 0x5011ecaa
        Faulting module name: ntdll.dll, version: 6.1.7601.17725, time stamp: 0x4ec49b8f
        Exception code: 0xc0000374
        Fault offset: 0x000ce6c3
        Faulting process id: 0xee8
        Faulting application start time: 0x01cd89bb777fc1dd
        Faulting application path: C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe
        Faulting module path: C:\Windows\SysWOW64\ntdll.dll

    I'm running it on Windows 7 64-bit. I've tried to repair, then uninstall and install again: nothing. I tried to restore to a previous system restore point: nothing. The extensions I installed, as far as I can remember:

    - VS10x Code Map
    - VSCommands
    - VisualSVN
    - NuGet package manager

    (all of the above my colleagues have too, and it works fine for them) and:

    - Web Essentials
    - Visual Studio Color Theme Editor
    - SlowCheetah
    - Mobile Ready HTML5

    Questions are:

    1. Has anyone else had this problem?
    2. Is there a way I can uninstall extensions from a command line or other software? (I removed the extensions folder, but that doesn't do anything.)
    3. Can I repair C:\Windows\SysWOW64\ntdll.dll? Is it really a problem with this DLL? I haven't been able to find any similar issue in other versions, but because VS2012 is new there doesn't seem to be much information either.
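    On question 2, two hedged starting points: devenv's /SafeMode switch starts VS with only the default environment loaded, which confirms whether an extension is the culprit, and VSIX-packaged extensions can be removed from a VS command prompt with the VSIX installer (the identifier comes from the extension's manifest; the one shown is a placeholder):

        devenv.exe /SafeMode
        VSIXInstaller.exe /uninstall:SomeExtension.VsixId

    Per-user extensions also unpack under %LocalAppData%\Microsoft\VisualStudio\11.0\Extensions, which is the folder worth clearing while VS is closed.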

    Read the article

  • SQL Server 2005 multiple insert with C#

    - by bottlenecked
    Hello. I have a class named Entry declared like this:

        class Entry {
            string Id { get; set; }
            string Name { get; set; }
        }

    and then a method that will accept multiple such Entry objects for insertion into the database using ADO.NET:

        static void InsertEntries(IEnumerable<Entry> entries) {
            //build a SqlCommand object
            using (SqlCommand cmd = new SqlCommand()) {
                ...
                const string refcmdText = "INSERT INTO Entries (id, name) VALUES (@id{0}, @name{0});";
                int count = 0;
                string query = string.Empty;
                //build a large query
                foreach (var entry in entries) {
                    query += string.Format(refcmdText, count);
                    cmd.Parameters.AddWithValue(string.Format("@id{0}", count), entry.Id);
                    cmd.Parameters.AddWithValue(string.Format("@name{0}", count), entry.Name);
                    count++;
                }
                cmd.CommandText = query;
                //and then execute the command
                ...
            }
        }

    And my question is this: should I keep using the above way of sending multiple insert statements (build a giant string of insert statements and their parameters and send it over the network), or should I keep an open connection and send a single insert statement for each Entry, like this:

        using (SqlCommand cmd = new SqlCommand())
        using (SqlConnection conn = new SqlConnection()) {
            //assign connection string and open connection
            ...
            cmd.Connection = conn;
            foreach (var entry in entries) {
                cmd.CommandText = "INSERT INTO Entries (id, name) VALUES (@id, @name);";
                cmd.Parameters.AddWithValue("@id", entry.Id);
                cmd.Parameters.AddWithValue("@name", entry.Name);
                cmd.ExecuteNonQuery();
            }
        }

    What do you think? Will there be a performance difference in SQL Server between the two? Are there any other consequences I should be aware of? Thank you for your time!
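    For completeness, a third option for this shape of problem is SqlBulkCopy, which streams all rows to the server in a single operation; a hedged sketch (table and column names taken from the question, connection string assumed; note also that the per-row variant above would need cmd.Parameters.Clear() each iteration to avoid duplicate-parameter errors):

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Stage the entries in a DataTable matching the target schema
            var table = new DataTable();
            table.Columns.Add("id", typeof(string));
            table.Columns.Add("name", typeof(string));
            foreach (var entry in entries)
                table.Rows.Add(entry.Id, entry.Name);

            using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "Entries" })
                bulk.WriteToServer(table);
        }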

    Read the article

  • What problem did MS solve by creating PowerShell? [closed]

    - by Fred
    I'm asking because PowerShell confuses me. I've been trying to write some deployment scripts using PowerShell and I've been less than enthused by the result. I have a co-worker who loves PowerShell and defends it at every turn. Said co-worker claims PowerShell was never written to be a strong shell, but instead was written to:

    a) allow you to peek and poke at .NET assemblies on the command line (why is this a reason for PowerShell to exist?);
    b) be hosted in .NET applications for automation, similar to DCOP in KDE and how Gnome is using CORBA;
    c) be treated as ".NET script" rather than as an actual shell (related to b).

    I've always felt like Windows was missing a decent way to bang out automation scripts. cmd is too simplistic in many cases, and WSH is too obtuse (although the combination can be used successfully, I'm not a fan). When I first heard about PowerShell I felt like Windows was finally getting a decent shell that would be able to help with automation of many tasks, but recent experiences, and my co-worker, tell me otherwise. To be clear, I don't take issue with the fact that it's built on .NET, or that it passes objects around rather than text (despite my Unix background :]), and I'm not arguing that PowerShell is useless. But from what I can see, it doesn't solve the problem I was hoping it would solve very well: as soon as you step outside of the .NET/PowerShell world, things quit being nice and cozy for you. So with all that out of the way: what problem did MS solve by creating PowerShell, or is it some political bastard child as I suspect? I've googled and haven't hit upon anything that sufficiently answered that for me, but the more citations the better.

    Read the article

  • How do I programmatically run all the JUnit tests in my Java application?

    - by Andrew McKinlay
    From Eclipse I can easily run all the JUnit tests in my application. I would like to be able to run the tests on target systems from the application jar, without Eclipse (or Ant or Maven or any other development tool). I can see how to run a specific test or suite from the command line. I could manually create a suite listing all the tests in my application, but that seems error-prone: I'm sure at some point I'll create a test and forget to add it to the suite. The Eclipse JUnit plugin has a wizard to create a test suite, but for some reason it doesn't "see" my test classes; it may be looking for JUnit 3 tests, not JUnit 4 annotated tests. I could write a tool that would automatically create the suite by scanning the source files. Or I could write code so the application would scan its own jar file for tests (either by naming convention or by looking for the @Test annotation). It seems like there should be an easier way. What am I missing?
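    For the programmatic half, JUnit 4 ships a runner that can be invoked from plain code, so a main() inside the application jar can execute tests with no IDE or build tool present; a minimal sketch (the test classes are placeholders; in practice they would come from the jar-scanning approach described above):

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.Failure;

        public class RunAllTests {
            public static void main(String[] args) {
                // Placeholder classes; substitute the discovered test classes
                Result result = JUnitCore.runClasses(FooTest.class, BarTest.class);
                for (Failure failure : result.getFailures())
                    System.out.println(failure);
                System.out.println(result.wasSuccessful() ? "OK" : "FAILURES");
            }
        }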

    Read the article

  • Having a problem inserting into database

    - by neo skosana
    I have a stored procedure:

        CREATE PROCEDURE busi_reg(IN instruc VARCHAR(10), IN tble VARCHAR(20),
            IN busName VARCHAR(50), IN busCateg VARCHAR(100), IN busType VARCHAR(50),
            IN phne VARCHAR(20), IN addrs VARCHAR(200), IN cty VARCHAR(50),
            IN prvnce VARCHAR(50), IN pstCde VARCHAR(10), IN nm VARCHAR(20),
            IN surname VARCHAR(20), IN eml VARCHAR(50), IN pswd VARCHAR(20),
            IN srce VARCHAR(50), IN refr VARCHAR(50), IN sess VARCHAR(50))
        BEGIN
            INSERT INTO b_register
            SET business_name = busName,
                business_category = busCateg,
                business_type = busType,
                phone = phne,
                address = addrs,
                city = cty,
                province = prvnce,
                postal_code = pstCde,
                firstname = nm,
                lastname = surname,
                email = eml,
                password = pswd,
                source = srce,
                ref_no = refr;
        END;

    This is my PHP script:

        $busName = $_POST['bus_name'];
        $busCateg = $_POST['bus_cat'];
        $busType = $_POST['bus_type'];
        $phne = $_POST['phone'];
        $addrs = $_POST['address'];
        $cty = $_POST['city'];
        $prvnce = $_POST['province'];
        $pstCde = $_POST['postCode'];
        $nm = $_POST['name'];
        $surname = $_POST['lname'];
        $eml = $_POST['email'];
        $srce = $_POST['source'];
        $ref = $_POST['ref_no'];

        $result2 = $db->query("CALL busi_reg('$instruc', '$tble', '$busName',
            '$busCateg', '$busType', '$phne', '$addrs', '$cty', '$prvnce',
            '$pstCde', '$nm', '$surname', '$eml', '$pswd', '$srce', '$refr', '')");

        if ($result) {
            echo "Data has been saved";
        } else {
            printf("Error: %s\n", $db->error);
        }

    Now the error that I get:

        Commands out of sync; you can't run this command now
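    "Commands out of sync" usually means an earlier CALL's extra result set was never drained: a MySQL stored procedure returns its result(s) plus a final status packet, and mysqli refuses further queries until all of them are consumed. A hedged sketch of the usual cleanup after each CALL (note, too, that the script assigns $result2 but tests $result):

        $result2 = $db->query("CALL busi_reg(/* ... parameters ... */)");

        // Drain any remaining result sets so the connection is reusable
        while ($db->more_results() && $db->next_result()) {
            if ($extra = $db->store_result()) {
                $extra->free();
            }
        }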

    Read the article

  • C: Fifo between threads, writing and reading strings

    - by Yonatan
    Hello once more, dear internet. I'm writing a small program that, among other things, writes a log file of commands received. To do that, I want to use a thread whose only job is to attempt to read from a pipe, while the main thread writes into that pipe whenever it should. Since I don't know the length of each string command, I thought about writing and reading the pointer to the char buf[MAX_MESSAGE_LEN]. Since what I've tried so far doesn't work, I'll post my best effort :P

        char str[] = "hello log thread 123456789 10 11 12 13 14 15 16 17 18 19\n";

        if (pipe(pipe_fd) != 0)
            return -1;
        pthread_t log_thread;
        pthread_create(&log_thread, NULL, log_thread_start, argv[2]);

        success_write = 0;
        do {
            write(pipe_fd[1], (void*)&str, sizeof(char*));
        } while (success_write < sizeof(char*));

    and the thread does this:

        char buffer[MAX_MSGLEN];
        int success_read;
        success_read = 0;
        //while(1) {
        do {
            success_read += read(pipe_fd[0], (void*)&buffer, sizeof(char*));
        } while (success_read < sizeof(char*));
        //}
        printf("%s", buffer);

    (Sorry if this doesn't indent; I can't seem to figure out this editor...) Oh, and pipe_fd[2] is a global variable. So, any help with this, either by the way I thought of, or another way I could read strings without knowing the length, would be much appreciated.

    On a side note, I'm working on the Eclipse IDE for C/C++, version 1.2.1, and I can't seem to set up the compiler so it links the pthread library into my project. I've resorted to writing my own Makefile to make it (pun intended :P) work. Anyone know what to do? I've looked online, but all I find are solutions that are probably for an older version, because the tabs and option keys are different. Anyway, thanks a bunch, internet! Yonatan
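    A hedged sketch of the pointer-passing idea: the snippet above writes from &str (the text itself) but only sizeof(char*) bytes of it; to send a pointer, write the pointer variable, and make sure the pointed-to string outlives the write (e.g. by sending a heap copy). Names here are illustrative:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int pipe_fd[2];

        /* Writer side: push the address of a heap copy through the pipe */
        void send_message(const char *msg) {
            char *copy = strdup(msg);               /* freed by the reader */
            write(pipe_fd[1], &copy, sizeof copy);  /* the pointer, not the bytes */
        }

        /* Reader side (log thread): receive an address, print, free */
        void *log_thread_start(void *arg) {
            char *msg;
            while (read(pipe_fd[0], &msg, sizeof msg) == sizeof msg) {
                printf("%s", msg);
                free(msg);
            }
            return NULL;
        }

    Pipe writes up to PIPE_BUF bytes are atomic, so a pointer always arrives whole. On the Eclipse side note: adding -pthread to the linker flags (Project Properties > C/C++ Build > Settings, under the linker's options in CDT) is the usual fix.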

    Read the article

  • How to solve this problem with Python

    - by morpheous
    I am "porting" an application I wrote in C++ into Python. This is the current workflow:

    1. Application is started from the console.
    2. Application parses CLI args.
    3. Application reads an ini configuration file which specifies which plugins to load, etc.
    4. Application starts a timer.
    5. Application iterates through each loaded plugin and orders it to start work. This spawns a new worker thread for the plugin.
    6. The plugins carry out their work and, when completed, they die.
    7. When the time interval (read from the config file) is up, steps 5-7 are repeated.

    Since I am new to Python (2 days and counting), the distinction between scripts, modules and packages is still a bit hazy to me, and I would like to seek advice from Pythonistas on how to implement the workflow described above in Python. To keep things simple, I have decided to leave the time-interval stuff out and instead run the Python script as a cron job. This is how I am thinking of approaching it:

    - Encapsulate the whole application in a package which is executable (i.e. can be run from the command line with arguments).
    - Write the plugins as modules (I think maybe it's better to implement each module in a separate file?).

    I haven't seen any examples of using threading in Python yet. Could someone provide a snippet of how I could spawn a thread to run a module? Also, I am not sure how to implement the concept of plugins in Python; any advice would be helpful, especially with a code snippet.
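    Not an authoritative design, but a minimal sketch of both asks: importlib loads a plugin module by its dotted name (so the ini file can simply list module names), and threading.Thread runs each plugin's entry point in a worker thread. The plugins package layout and the per-plugin run() convention are assumptions:

        import importlib
        import threading

        def start_plugin(module_name):
            plugin = importlib.import_module(module_name)  # e.g. "plugins.reporter"
            plugin.run()                                   # assumed entry point

        names = ["plugins.reporter", "plugins.cleaner"]    # normally read from the ini file
        threads = [threading.Thread(target=start_plugin, args=(n,)) for n in names]
        for t in threads:
            t.start()
        for t in threads:
            t.join()   # wait for every plugin before the cron run exits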

    Read the article

  • jQuery with PHP: loading a file

    - by Marcus Solv
    I'm trying to use jQuery with some simple PHP code:

        $('#some').click(function() {
            <?php require_once('some1.php?name="some' + index + '"'); ?>
        });

    It shows no error, so I don't know what is wrong. In some1.php I have:

        <?php
        //Start session
        session_start();

        //Include database connection details
        require_once('../sql/config.php');

        //Connect to mysql server
        $link = mysql_connect(DB_HOST, DB_USER, DB_PASSWORD);
        if (!$link) {
            die('Failed to connect to server: ' . mysql_error());
        }

        //Select database
        $db = mysql_select_db(DB_DATABASE);
        if (!$db) {
            die("Unable to select database");
        }

        //Function to sanitize values received from the form. Prevents SQL injection
        function clean($str)
        {
            $str = @trim($str);
            if (get_magic_quotes_gpc()) {
                $str = stripslashes($str);
            }
            return mysql_real_escape_string($str);
        }

        //Sanitize the GET values
        $name = clean($_GET['name']);
        ?>

    It's not complete, because I want to add an SQL INSERT command: whenever I click on #some, I want that file to execute and create an entry in a table (which isn't defined yet).
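    A hedged note on the model, with a sketch: require_once runs on the server while the page is being generated, long before any click, and it takes a file path rather than a URL with a query string. Triggering server-side work from a click is what an AJAX request is for, e.g. jQuery's $.get:

        $('#some').click(function () {
            // Runs in the browser at click time; the server runs some1.php per request
            $.get('some1.php', { name: 'some' + index }, function (response) {
                console.log('saved:', response);
            });
        });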

    Read the article

  • Ajax and JSF 1.1 using a hidden iframe with "proxy forms": what do you think about this development strategy?

    - by Steel Plume
    Hi, I am currently on the JSF 1.1 spec, so I am trying to simplify what is too complex for me :p — managing backing beans with conflicting navigation rules, external params breaking rules, and so on. For example, when I need a backing bean used by other "views", I simply look it up using FacesContext inside other backing beans; but often a bean is too wired into JSF navigation/initialization rules to be really usable, and the simpler the setup, the more useful FacesContext becomes.

    So, with only a bit of cross-browser JavaScript (simply a form copy and a read-write on a "proxy" form), I create a sort of proxy form inside the main user page, totally dissociated from JSF navigation rules but still built with the JSF taglibs. Ajax gives me flexibility in the user interaction, but data is always managed by JSF. Practically, I delegate all "fictitious" user actions to a hidden iframe, which builds up all the needed forms according to JSF rules; a script then clones the iframe's form output and puts it into the user view (with CSS for showing/hiding the real command buttons and making it pretty). The user plays around, and when he clicks submit, a script copies all "proxied" form values into the real JSF form inside the iframe, which performs the real submit; what it returns is obviously up to you. Now JSF is really a pleasure :-p

    My real interest is to know your alternative strategies for using pure Ajax with JSF 1.1 without adopting a middle layer like Ajax4jsf and the others — all good choices, but too many "plugins" on top of the spec for my taste.

    Read the article
