Search Results

Search found 24915 results on 997 pages for 'ordered test'.

Page 825 of 997

  • CIL and JVM Little endian to big endian in c# and java

    - by Haythem
    Hello, I am using C# on the client, where I convert double values to a byte array. On the server I am using Java, with writeDouble and readDouble to convert between double values and byte arrays. The problem is that the double values coming out on the Java side are not the double values that were originally given to C#. writeDouble in Java converts the double argument to a long using the doubleToLongBits method, and then writes that long value to the underlying output stream as an 8-byte quantity, high byte first. doubleToLongBits returns a representation of the specified floating-point value according to the IEEE 754 floating-point "double format" bit layout. The program on the server is expecting 64-102-112-0-0-0-0-0 from C# in order to convert it to 1700.0, but it is getting 0000014415464 from C# after C# converted 1700.0. This is my code in C#:

        class User {
            double workingStatus;

            public void persist() {
                byte[] dataByte;
                using (MemoryStream ms = new MemoryStream()) {
                    using (BinaryWriter bw = new BinaryWriter(ms)) {
                        bw.Write(workingStatus);
                        bw.Flush();
                        bw.Close();
                    }
                    dataByte = ms.ToArray();
                    for (int j = 0; j < dataByte.Length; j++) {
                        Console.Write(dataByte[j]);
                    }
                }
            }

            public double WorkingStatus {
                get { return workingStatus; }
                set { workingStatus = value; }
            }
        }

        class Test {
            static void Main() {
                User user = new User();
                user.WorkingStatus = 1700.0;
                user.persist();
            }
        }

    Thank you for the help.
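    The symptom above is the usual little-endian vs. big-endian disagreement: the 8 bytes of an IEEE 754 double written low-byte-first on one side are read high-byte-first on the other. A minimal sketch of the idea in Python (not the asker's C# or Java; only the value 1700.0 is taken from the question):

        import struct

        value = 1700.0

        little = struct.pack('<d', value)   # low byte first, as the C# output in the question suggests
        big = struct.pack('>d', value)      # high byte first, what Java's readDouble expects

        print(list(little))                 # [0, 0, 0, 0, 0, 144, 154, 64]
        print(list(big))                    # the same 8 bytes in reverse order
        print(big == little[::-1])          # True: reversing the byte array bridges the two formats

    In other words, reversing the byte array on one side (or writing it in network byte order to begin with) should make both ends agree.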

    Read the article

  • In Ruby, how to implement global behaviour?

    - by Gordon McAllister
    Hi all, I want to implement the concept of a Workspace. This is a global concept - all other code will interact with one instance of this Workspace. The Workspace will be responsible for maintaining the current system state (i.e. interacting with the system model, persisting the system state, etc.). So what's the best design strategy for my Workspace, bearing in mind it will have to be testable (using RSpec now, but happy to look at alternatives)? Having read through some open source projects out there, I've seen 3 strategies, none of which I can identify as "the best practice". They are: (1) include the singleton class - but how testable is this? Will the global state of Workspace change between tests? (2) implement all behaviour as class methods - again, how do you test this? (3) implement all behaviour as module methods - not sure about this one at all! Which is best? Or is there another way? Thanks, Gordon
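    For what it's worth, the testability concern with any of the three strategies usually comes down to being able to reset the global state between examples. One pattern is a single module-level instance plus an explicit reset hook that the test framework calls before each example; here is a rough sketch of that shape in Python rather than Ruby (the names Workspace, current and reset are illustrative, not from the question's codebase):

        class Workspace:
            """Holds the current system state; the rest of the code talks to current()."""

            def __init__(self):
                self.state = {}

            def persist(self):
                # placeholder for writing the system state somewhere durable
                pass


        _current = None

        def current():
            global _current
            if _current is None:
                _current = Workspace()
            return _current

        def reset():
            """Called from test setup so examples never share workspace state."""
            global _current
            _current = None


        # in a test: reset(), then exercise code that uses current()
        reset()
        current().state["mode"] = "test"
        assert current().state == {"mode": "test"}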

    Read the article

  • IIS6 configuration for WCF/Silverlight

    - by Grayson Mitchell
    I am trying the simple scenario of running a WCF service to return Active Directory information on a user (http://rouslan.com/2009/03/20-steps-to-get-together-windows-authentication-silverlight-and-wcf-service/) using Silverlight 4 & .NET 4. However, I am being driven insane trying to set this up in IIS. Currently I have my solution working in VS, but when I try to run the service in IIS a debug window tries to open... (and I can't get rid of it; it is complaining about the WCF call).

        <basicHttpBinding>
          <binding name="winAuthBasicHttpBinding">
            <security mode="TransportCredentialOnly">
              <transport clientCredentialType="Ntlm"/>
            </security>
          </binding>
        </basicHttpBinding>

    The insanity: I have got IIS to successfully call a WCF service (but can't reproduce it), and I have created 5 projects to try and get this working, but in my 5th I can't even browse the site (it says it can't download the Silverlight application, although the MIME types are set up). My next step is to install Server 2008 on a test machine and try IIS7... as all the various walkthroughs I have found just don't seem to work in IIS6.

    Read the article

  • Dashcode code translation

    - by Alex Mcp
    Hi, a quick, probably easy question whose answer is probably "best practice". I'm following a tutorial for a custom-template mobile Safari webapp, and to change views around this code is used:

        function btnSave_ClickHandler(event) {
            var views = document.getElementById('stackLayout');
            var front = document.getElementById('mainScreen');
            if (views && views.object && front) {
                views.object.setCurrentView(front, true);
            }
        }

    My question is just about the if conditional statement. What is this triplet saying, and why do each of those things need to be verified before the view can be changed? Does views.object just test to see if the views variable responds to the object method? Why is this important? EDIT - This is/was the main point of this question: it concerns not JavaScript as a language and how if statements work, but rather WHY these 3 things specifically need to be checked. Under what scenarios might views and front not exist? I don't typically write my code so redundantly. If the name of my MySQL table isn't changing, I'll just say UPDATE 'mytable' WHERE... instead of the much more verbose (and in my view, redundant):

        $mytable = "TheSQLTableName";
        if ($mytable == an actual table && $mytable exists && entries can be updated) {
            UPDATE $mytable;
        }

    Whereas if the table's name (or in the JS example, the views' names) ARE NOT "hard coded" but are instead a user input or otherwise mutable, I might write my code as the Dashcode example has it. So tell me, can these values "go wrong" anyhow? Thanks!

    Read the article

  • Sorting an array of Doubles with NaN in it

    - by Agent Worm
    This is more of a 'Can you explain this' type of question than it is anything else. I came across a problem at work where we were using NaN values in a table, but when the table was sorted, it came out in a very strange, strange manner. I figured NaN was mucking something up, so I wrote up a test application to see if this is true. This is what I did:

        static void Main(string[] args) {
            double[] someArray = { 4.0, 2.0, double.NaN, 1.0, 5.0, 3.0, double.NaN, 10.0, 9.0, 8.0 };
            foreach (double db in someArray) {
                Console.WriteLine(db);
            }
            Array.Sort(someArray);
            Console.WriteLine("\n\n");
            foreach (double db in someArray) {
                Console.WriteLine(db);
            }
            Console.ReadLine();
        }

    Which gave the result: Before: 4,2,NaN,1,5,3,NaN,10,9,8 After: 1,4,NaN,2,3,5,8,9,10,NaN. So yes, the NaN somehow caused the array to be sorted in a strange way. To quote Fry: "Why is those things?"
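    The strange order is not specific to .NET: NaN compares as neither less than, greater than, nor equal to anything (including itself), so a comparison-based sort that assumes a consistent ordering can leave elements stranded around the NaNs. A quick illustration of the comparison behaviour in Python (the numbers are the ones from the question):

        import math

        nan = float('nan')
        print(nan < 1.0, nan > 1.0, nan == nan)        # False False False - no consistent ordering

        data = [4.0, 2.0, nan, 1.0, 5.0, 3.0, nan, 10.0, 9.0, 8.0]
        print(sorted(data))                            # the NaNs partition the list; the order can look arbitrary

        # One workaround: impose a total order explicitly, e.g. push NaNs to the end.
        print(sorted(data, key=lambda x: (math.isnan(x), x)))

    Sorting with a key or comparer that defines a total order, as in the last line, gives a deterministic result.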

    Read the article

  • ASP.NET MVC View Engine Resolution Sequence

    - by intangible02
    I created a simple ASP.NET MVC version 1.0 application. I have a ProductController which has one action, Index. In the views, I created a corresponding Index.aspx under the Product subfolder. Then I referenced the Spark dll and created Index.spark under the same Product view folder. The Application_Start looks like:

        protected void Application_Start() {
            RegisterRoutes(RouteTable.Routes);
            ViewEngines.Engines.Clear();
            ViewEngines.Engines.Add(new Spark.Web.Mvc.SparkViewFactory());
            ViewEngines.Engines.Add(new WebFormViewEngine());
        }

    My expectation is that since the Spark engine is registered before the default WebFormViewEngine, when browsing the Index action of the Product controller the Spark engine should be used, and the WebFormViewEngine should be used for all other URLs. However, testing shows that the Index action of the Product controller also uses the WebFormViewEngine. If I comment out the registration of WebFormViewEngine (the last line in the code), I can see that the Index action is rendered by the Spark engine and the rest of the URLs generate an error (since the default engine is gone), which proves that all my Spark code is correct. Now my question is: how is the view engine resolved? Why does the registration sequence not take effect?

    Read the article

  • Uncatchable AccessViolationException

    - by Roy
    Hi all, I'm getting close to desperate... I am developing a field service application for Windows Mobile 6.1 using C# and quite a lot of P/Invoking (I think I'm referencing about 50 native functions). Under normal circumstances this goes without any problem, but when I start stressing the GC I'm getting a nasty 0xC0000005 error which seems uncatchable. In my test I'm rapidly closing and opening a dialog form (the form did make use of native functions, but for testing I commented these out) and after a while the Windows Mobile error reporter comes around to tell me that there was a fatal error in my application. My code wraps a try-catch around Application.Run(masterForm); and hooks into the CurrentDomain.UnhandledException event, but the application still crashes. Even when I attach the debugger, Visual Studio just tells me "The remote connection to the device has been lost" when the exception occurs. Since I didn't succeed in catching the exception in the managed environment, I tried to make sense of the Error Reporter log file. But this doesn't make any sense either: the only consistent thing about the error is the application it occurs in. The thread the error occurs in is unknown to me, the module where the error occurs differs from time to time (I've seen my application.exe, WS2.dll, netcfagl3_5.dll and mscoree3_5.dll), and even the error code is not always the same (most of the time it's 0xC0000005, but I've also seen a 0x80000002 error, which is a warning according to the first byte?). I tried debugging through BugTrap, but strangely enough it crashes with the same error code (0xC0000005). I tried to open the kdmp file with Visual Studio, but I can't seem to make any sense out of it because it only shows me disassembler code when I step into the error (unless I have the right .pdb files, which I don't). Same goes for WinDbg. To make a long story short: I frankly don't have a single clue where to look for this error, and I'm hoping some bright soul on Stack Overflow does. I'm happy to provide some code, but at this moment I don't know which piece to provide. Any help is greatly appreciated!

    Read the article

  • MSMQ on Win2008 R2 won’t receive messages from older clients

    - by Graffen
    Hi all, I'm battling a really weird problem here. I have a Windows 2008 R2 server with Message Queueing installed. On another machine, running Windows 2003, is a service that is set up to send messages to a public queue on the 2008 server. However, messages never show up on the server. I've written a small console app that just sends a "Hello World" message to a test queue on the 2008 machine. Running this app on XP or 2003 results in absolutely nothing. However, when I try running the app on my Windows 7 machine, a message is delivered just fine. I've been through all sorts of security settings, disabled firewalls on all machines, etc. The event log shows nothing of interest, and no exceptions are being thrown on the clients. Running a packet sniffer (Wireshark) on the server reveals only a little: when trying to send a message from XP or 2003, I only see an ICMP "Port Unreachable" error on port 3527 (which I gather is an MQPing packet?). After that, silence. Wireshark shows a nice little stream of packets when I try from my Win7 client (as expected - messages get delivered just fine from Win7). I've enabled MSMQ End2End logging on the server, but only entries from the messages sent from my Win7 machine appear in the log. So somehow it seems that messages are being dropped silently somewhere along the route from XP or 2003 to my 2008 server. Does anyone have any clues as to what might be causing this mysterious behaviour? -- Jesper

    Read the article

  • fsockopen soap request

    - by gosom
    I am trying to send a SOAP message to a service using PHP. I want to do it with fsockopen; here is the code:

        <?php
        $fp = @fsockopen("ssl://xmlpropp.worldspan.com", 443, $errno, $errstr);
        if (!is_resource($fp)) {
            die('fsockopen call failed with error number ' . $errno . '.' . $errstr);
        }
        $soap_out  = "POST /xmlts HTTP/1.1\r\n";
        $soap_out .= "Host: 212.127.18.11:8800\r\n";
        //$soap_out .= "User-Agent: MySOAPisOKGuys \r\n";
        $soap_out .= "Content-Type: text/xml; charset='utf-8'\r\n";
        $soap_out .= "Content-Length: 999\r\n\r\n";
        $soap_put .= "Connection: close\r\n";
        $soap_out .= "SOAPAction:\r\n";
        $soap_out .= ' Worldspan This is a test ';
        if (!fputs($fp, $soap_out, strlen($soap_out)))
            echo "could not write";
        echo "<xmp>" . $soap_out . "</xmp>";
        echo "--------------------<br>";
        while (!feof($fp)) {
            $soap_in .= fgets($fp, 100);
        }
        echo "<xmp>$soap_in</xmp>";
        fclose($fp);
        echo "ok";

    The above code just hangs. If I remove the while loop it prints "ok", so I suppose the problem is at $soap_in .= fgets($fp, 100). Any ideas of what is happening?
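    One detail worth noting about the loop above: a read-until-feof() loop only terminates when the server closes the connection, and an HTTP/1.1 server is allowed to keep the socket open (keep-alive), in which case the final read blocks indefinitely. A rough sketch of the same kind of request in Python, sending Connection: close and a Content-Length that matches the real body (the host, path and body here are placeholders, not the asker's actual Worldspan endpoint):

        import socket
        import ssl

        HOST, PORT, PATH = "example.com", 443, "/xmlts"         # placeholders
        body = "<soap:Envelope>...</soap:Envelope>"              # placeholder SOAP payload

        request = (
            "POST {path} HTTP/1.1\r\n"
            "Host: {host}\r\n"
            "Content-Type: text/xml; charset=utf-8\r\n"
            "Content-Length: {length}\r\n"
            "SOAPAction: \"\"\r\n"
            "Connection: close\r\n"                              # ask the server to close so EOF arrives
            "\r\n"
            "{body}"
        ).format(path=PATH, host=HOST, length=len(body), body=body)

        ctx = ssl.create_default_context()
        with ctx.wrap_socket(socket.create_connection((HOST, PORT)), server_hostname=HOST) as s:
            s.sendall(request.encode())
            chunks = []
            while True:                                          # read-until-EOF only works because of Connection: close
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)

        print(b"".join(chunks).decode(errors="replace"))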

    Read the article

  • php parsing xml result from ipb ssi tool

    - by Sir Troll
    Last week my code was running fine, and now (without changing anything) it is no longer able to parse the elements out of the XML. The response from the SSI tool:

        <?xml version="1.0" encoding="ISO-8859-1"?>
        <ipbsource><topic id="32">
        <title>Test topic</title>
        <lastposter id="1">Drake</lastposter>
        <starter id="18">Drake</starter>
        <forum id="3">Updates</forum>
        <date timestamp="1345600720">22 August 2012 - 03:58 AM</date>
        </topic>
        </ipbsource>

    Update: I switched to SimpleXML but I can't extract data from the XML:

        $xml = file_get_contents('http://site.com/forum/ssi.php?a=out&f=2&show=10&type=xml');
        $xml = new SimpleXMLElement($xml);
        $item_array = array();
        var_dump($xml);
        foreach ($xml->topic as $el) {
            var_dump($el);
            echo 'Title: ' . $el->title;
        }

    The var_dump output:

        object(SimpleXMLElement)#1 (1) { [0]=> string(1) " " }

    Read the article

  • SQLite assembly not copied to output folder for unit testing

    - by Groo
    Problem: the SQLite assembly referenced in my DAL assembly does not get copied to the output folder when running unit tests (Copy Local is set to true). I am working on a .NET 3.5 app in VS2008, with NHibernate & SQLite in my DAL. Data access is exposed through the IRepository interface (repository factory) to other layers, so there is no need to reference NHibernate or the System.Data.SQLite assemblies in other layers. For unit testing, there is a public factory method (also in my DAL) which creates an in-memory SQLite session and creates a new IRepository implementation. This is also done to avoid having a shared SQLite in-memory config for all assemblies which need it, and to avoid referencing those DAL-internal assemblies. The problem is when I run unit tests which reside in a separate project: if I don't add System.Data.SQLite as a reference to the unit test project, it doesn't get copied to the TestResults...\Out folder (although this project references my DAL project, which references System.Data.SQLite, which has its Copy Local property set to true), so the tests fail while NHibernate is being configured. If I add the reference to my testing project, then it does get copied and the unit tests work. What am I doing wrong?

    Read the article

  • VS2008 adding SQL Server Database (SQL Server 2008 Mgmt Studio) not working

    - by Kahn
    I'm trying to practice using ASP.NET MVC at home, but I ran into an impossible problem. I cannot open a connection to SQL Server 2008; I get this error: "Connections to SQL Server files (*.mdf) require SQL Server Express 2005 to function properly. ..." I've googled around for numerous responses, none of them either working or addressing this issue. I'm running Vista 32-bit, my SQL Server 2008 Mgmt Studio is also 32-bit, and I have SP1 installed both on VS2008 Professional and on the SQL Server. I changed the machine.config connectionStrings from ./SQLExpress to my SQL Server 2008 name. Now if I connect manually through web.config, in an asp:datasource or code-behind, everything works fine. But for some reason, trying to add a DB connection directly like this always gets the error. This is pretty fatal, since I can't rightly do much unless I can use LINQ to SQL with my MVC test project, and this is the only way I know how. It worked fine at school and work, but not at home. Installing SQL Server Express 2005, as some have suggested, is not an option. Obviously it HAS to work with SQL Server 2008. Thanks in advance.

    Read the article

  • Generate exe in .Net

    - by rwallace
    In .NET, you can generate byte code in memory, and presumably save the resulting program to a .exe file. To do the first step, I have the following test code adapted from http://www.code-magazine.com/Article.aspx?quickid=0301051

        var name = new AssemblyName();
        name.Name = "MyAssembly";
        var ad = Thread.GetDomain();
        var ab = ad.DefineDynamicAssembly(name, AssemblyBuilderAccess.Run);
        var mb = ab.DefineDynamicModule("MyModule");
        var theClass = mb.DefineType("MathOps", TypeAttributes.Public);
        var retType = typeof(System.Int32);
        var parms = new Type[2];
        parms[0] = typeof(System.Int32);
        parms[1] = typeof(System.Int32);
        var meb = theClass.DefineMethod("ReturnSum", MethodAttributes.Public, retType, parms);
        var gen = meb.GetILGenerator();
        gen.Emit(OpCodes.Ldarg_1);
        gen.Emit(OpCodes.Ldarg_2);
        gen.Emit(OpCodes.Add_Ovf);
        gen.Emit(OpCodes.Stloc_0);
        gen.Emit(OpCodes.Br_S);
        gen.Emit(OpCodes.Ldloc_0);
        gen.Emit(OpCodes.Ret);
        theClass.CreateType();

    How do you do the second step, and save the result to a .exe?

    Read the article

  • simple jquery fetch from mysql

    - by JPro
    I am trying to use jQuery with MySQL and I wrote something like this:

        <html>
        <head>
        <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js"></script>
        <script>
        function example_ajax_request() {
            $('#example-placeholder').html('<p>Loading results ... <img src="ajax-loader.gif" /></p>');
            $('#example-placeholder').load("loadres.php");
        }
        </script>
        </head>
        <body>
        <div id="query">
            <select name="show" id="box">
                <option value="0">Select A Test</option>
                <option value="All">--All--</option>
                <option value="M1">Model1</option>
            </select>
            <input type="button" onclick="example_ajax_request()" value="Click Me!" />
        </div>
        <div id="example-placeholder">
            <p>Placeholding text</p>
        </div>
        </body>
        </html>

    Basically I want to pass parameters to the loadres.php file, but I am unable to figure out the exact way to do it. Any help is appreciated. Thanks.

    Read the article

  • Bash script to log in over SSH

    - by user1042928
    Sorry if this seems like a dumb question, but I am just learning bash scripting. For a school project we need to code an RTX that runs in Unix. It runs as a process in the terminal, takes in user input and then prints it to the screen. I want to write a bash script to test that it can respond to lots of quick user input without overflowing or failing. My main problem is that once the RTX starts, the bash script stops on that line until the RTX terminates, and only then prints the loop output to the terminal (instead of sending it to the RTX prompt as I intend it to). I have tried running the RTX in the background, but that didn't work. I need to find a way to redirect input to the RTX from a bash script while it is still running. Google searches didn't come up with examples that I understood/could adapt. Any help is appreciated.

        #!/bin/bash
        # declare STRING variable
        STRING="RTX Worked =D"

        # Start the rtx in a new process; stuck on this line until rtx terminates.
        ./main
        # Somehow redirect io to the rtx.
        for i in `seq 1 100`; do
            echo $i
            echo " \n"
        done
        echo $STRING
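    A common way to drive a console program with scripted input is to write to its standard input through a pipe rather than starting it and then trying to type at it. Here is a small sketch of that idea in Python (an alternative to doing it purely in the shell); it assumes the executable is ./main as in the question:

        import subprocess

        # Start the RTX and keep a pipe connected to its standard input.
        proc = subprocess.Popen(["./main"], stdin=subprocess.PIPE, text=True)

        # Feed it rapid-fire input lines instead of typing at its prompt.
        for i in range(1, 101):
            proc.stdin.write(f"{i}\n")
            proc.stdin.flush()

        proc.stdin.close()      # signal end of input
        proc.wait()
        print("RTX Worked =D")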

    Read the article

  • Application doesn't exit with 0 threads

    - by Bryce Wagner
    We have a WinForms desktop application which is heavily multithreaded: 3 threads run with Application.Run plus a bunch of other background worker threads. Getting all the threads to shut down properly was kind of tricky, but I thought I had finally got it right. But when we actually deployed the application, users started experiencing the application not exiting. There's a System.Threading.Mutex to prevent them from running the app multiple times, so they have to go into Task Manager and kill the old instance before they can run it again. Every thread gets a Thread.Join before the main thread exits, and I added logging to each thread I spawn. According to the log, every single thread that starts also exits, and the main thread also exits. Even stranger, running Sysinternals Process Explorer shows all the threads disappear when the application exits. As in, there are 0 threads (managed or unmanaged), but the process is still running. I can't reproduce this on any developer's computer or in our test environment, and so far I've only seen it happen on Windows XP (not Vista or Windows 7 or any Windows Server). How can a process keep running with 0 threads?

    Read the article

  • Using Regex Replace when looking for un-escaped characters

    - by Daniel Hollinrake
    I've got a requirement that is basically this. If I have a string of text such as "There once was an 'ugly' duckling but it could never have been \'Scarlett\' Johansen", then I'd like to match the quotes that haven't already been escaped. These would be the ones around 'ugly', not the ones around 'Scarlett'. I've spent quite a while on this using a little C# console app to test things and have come up with the following solution:

        private static void RegexFunAndGames() {
            string result;
            string sampleText = @"Mr. Grant and Ms. Kelly starred in the film \'To Catch A Thief' but not in 'Stardust' because they'd stopped acting by then";
            string rePattern = @"\\'";
            string replaceWith = "'";

            Console.WriteLine(sampleText);
            Regex regEx = new Regex(rePattern);
            result = regEx.Replace(sampleText, replaceWith);
            result = result.Replace("'", @"\'");
            Console.WriteLine(result);
        }

    Basically what I've done is a two-step process: find those characters that have already been escaped, undo that, then do everything again. It sounds a bit clumsy, and I feel there could be a better way.
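    For comparison, regex flavours that support negative lookbehind (the .NET engine among them) can match only the un-escaped quotes in a single pass with a pattern along the lines of (?<!\\)'. A quick sketch of that idea in Python, reusing the sentence from the question:

        import re

        text = r"There once was an 'ugly' duckling but it could never have been \'Scarlett\' Johansen"

        # (?<!\\)' matches a quote that is NOT immediately preceded by a backslash.
        print(re.findall(r"(?<!\\)'", text))        # finds the two quotes around 'ugly'

        # Escape only those quotes, leaving the already-escaped \' alone.
        print(re.sub(r"(?<!\\)'", r"\\'", text))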

    Read the article

  • ARMv6 FIQ, acknowledge interrupt

    - by fastmonkeywheels
    I'm working with an i.MX35 ARMv6-core processor. I have interrupt 62 configured as a FIQ with my handler installed and being called. My handler at the moment just toggles an output pin so I can test latency with a scope. With the code below, once I trigger the FIQ it continues forever as fast as it can, apparently not being acknowledged. I'm triggering the FIQ by means of the Interrupt Force Register, so I'm assured that the source isn't triggering it this fast. If I disable interrupt 62 in the AVIC in my FIQ routine, the interrupt only triggers once. I have read the sections on the VIC Port in the ARM1136JF-S and ARM1136J-S Technical Reference Manual, and it covers the proper exit procedure. I only have one FIQ handler, so I have no need to branch. The line that I don't understand is:

        STR R0, [R8, #AckFinished]

    I'm not sure what AckFinished is supposed to be or what this command is supposed to do. My FIQ handler is below:

        ldr r9, IOMUX_ADDR12
        ldr r8, [r9]
        orr r8, #0x08           @ top LED
        str r8, [r9]            @ turn on LED
        bic r8, #0x08           @ top LED
        str r8, [r9]            @ turn off LED
        subs pc, r14, #4

        IOMUX_ADDR12: .word 0xFC2A4000   @ remapped IOMUX addr

    My handler returns just fine and normal system operation resumes if I disable it after the first go; otherwise it triggers constantly and the system appears to hang. Do you think my assumption is right that the core isn't acknowledging the AVIC, or could there be another cause of this FIQ triggering?

    Read the article

  • Maintain one to one mapping between objects

    - by Rohan West
    Hi there, I have the following two classes that provide a one-to-one mapping between each other. How do I handle null values? When I run the second test I get a StackOverflowException. How can I stop this recursive cycle? Thanks.

        [TestMethod]
        public void SetY() {
            var x = new X();
            var y = new Y();
            x.Y = y;
            Assert.AreSame(x.Y, y);
            Assert.AreSame(y.X, x);
        }

        [TestMethod]
        public void SetYToNull() {
            var x = new X();
            var y = new Y();
            x.Y = y;
            y.X = null;
            Assert.IsNull(x.Y);
            Assert.IsNull(y.X);
        }

        public class X {
            private Y _y;
            public Y Y {
                get { return _y; }
                set {
                    if (_y != value) {
                        if (_y != null) { _y.X = null; }
                        _y = value;
                        if (_y != null) { _y.X = this; }
                    }
                }
            }
        }

        public class Y {
            private X _x;
            public X X {
                get { return _x; }
                set {
                    if (_x != value) {
                        if (_x != null) { _x.Y = null; }
                        _x = value;
                        if (_x != null) { _x.Y = this; }
                    }
                }
            }
        }
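    One common way to break a cycle like this is to clear the backing field before notifying the other side, so that when the partner calls back, the setter sees the link already gone and returns early. A sketch of that pattern, transcribed into Python rather than the question's C#:

        class X:
            def __init__(self):
                self._y = None

            @property
            def y(self):
                return self._y

            @y.setter
            def y(self, value):
                if self._y is value:
                    return
                old, self._y = self._y, None     # detach first, then notify the old partner
                if old is not None:
                    old.x = None
                self._y = value
                if value is not None:
                    value.x = self


        class Y:
            def __init__(self):
                self._x = None

            @property
            def x(self):
                return self._x

            @x.setter
            def x(self, value):
                if self._x is value:
                    return
                old, self._x = self._x, None     # same trick on this side
                if old is not None:
                    old.y = None
                self._x = value
                if value is not None:
                    value.y = self


        x, y = X(), Y()
        x.y = y
        y.x = None
        print(x.y, y.x)     # None None, with no unbounded recursion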

    Read the article

  • How to connect to SQLServer 2k5 using Ruby 1.8.7 over W2k3 with active record 2.3.5

    - by Luke
    Hi all, sorry for the blast. I'm trying to connect to a SQL Server 2005 instance using Ruby 1.8.7 on W2k3 with ActiveRecord 2.3.5. But when I run 'rake migrate' it throws the following:

        rake migrate --trace
        Hoe.new {...} deprecated. Switch to Hoe.spec.
        Invoke migrate (first_time)
        Invoke environment (first_time)
        Execute environment
        Execute migrate
        rake aborted!
        no such file to load -- odbc
        (...)
        C:/Program Files/test/Rakefile:146
        (...)

    So, my Rakefile at line 146 says:

        ActiveRecord::Migrator.migrate('db/migrate', ENV["VERSION"] ? ENV["VERSION"].to_i : nil)

    The database.yml has been configured in so many ways without success. I've tried setting the mode to odbc, configuring a system DSN, and relying completely on the ActiveRecord support for SQL Server, but no success at all. The same Rakefile works fine against Postgres and Oracle with the proper gems installed, of course. But I can't get this to work. Any help will be appreciated. Thanks in advance!

    Read the article

  • C-macro: set a register field defined by a bit-mask to a given value

    - by geschema
    I've got 32-bit registers with fields defined as bit-masks, e.g.

        #define BM_TEST_FIELD 0x000F0000

    I need a macro that allows me to set a field (defined by its bit-mask) of a register (defined by its address) to a given value. Here's what I came up with:

        #include <stdio.h>
        #include <assert.h>

        typedef unsigned int u32;

        /*
         * Set a given field defined by a bit-mask MASK of a 32-bit register at address
         * ADDR to a value VALUE.
         */
        #define SET_REGISTER_FIELD(ADDR, MASK, VALUE)                               \
        {                                                                           \
            u32 mask = (MASK); u32 value = (VALUE);                                 \
            u32 mem_reg = *(volatile u32*)(ADDR);  /* Get current register value */ \
            assert((MASK) != 0);        /* Null masks are not supported */          \
            while (0 == (mask & 0x01))  /* Shift the value to the left until */     \
            {                           /* it aligns with the bit field */          \
                mask = mask >> 1; value = value << 1;                               \
            }                                                                       \
            mem_reg &= ~(MASK);         /* Clear previous register field value */   \
            mem_reg |= value;           /* Update register field with new value */  \
            *(volatile u32*)(ADDR) = mem_reg;  /* Update actual register */         \
        }

        /* Test case */
        #define BM_TEST_FIELD 0x000F0000

        int main()
        {
            u32 reg = 0x12345678;
            printf("Register before: 0x%.8X\n", reg);  /* should be 0x12345678 */
            SET_REGISTER_FIELD(&reg, BM_TEST_FIELD, 0xA);
            printf("Register after:  0x%.8X\n", reg);  /* should be 0x123A5678 */
            return 0;
        }

    Is there a simpler way to do it?
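    The while loop in the macro exists only to discover how far the field is shifted; that shift can be computed directly from the mask's lowest set bit (mask & -mask), which removes the loop entirely. A quick illustration of the arithmetic in Python, using BM_TEST_FIELD and the test values from the question (the C equivalent would use the same expressions, or a compiler intrinsic for counting trailing zeros):

        MASK = 0x000F0000                              # BM_TEST_FIELD
        VALUE = 0xA
        reg = 0x12345678

        shift = (MASK & -MASK).bit_length() - 1        # index of the lowest set bit: 16
        reg = (reg & ~MASK) | ((VALUE << shift) & MASK)

        print(hex(reg))                                # 0x123a5678, matching the question's expected output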

    Read the article

  • decimal.TryParse() drops leading "1"

    - by Martin Harris
    Short and sweet version: on one machine out of around a hundred test machines, decimal.TryParse() is converting "1.01" to 0.01. Okay, this is going to sound crazy but bear with me... We have a client application that communicates with a web service through JSON, and that service returns a decimal value as a string, so we store it as a string in our model object:

        [DataMember(Name = "value")]
        public string Value { get; set; }

    When we display that value on screen it is formatted to a specific number of decimal places. So the process we use is string to decimal, then decimal to string. The application is currently undergoing final testing and is running on more than 100 machines, where this all works fine. However, on one machine, if the decimal value has a leading '1' then it is replaced by a zero. I added simple logging to the code so it looks like this:

        Log("Original string value: {0}", value);
        decimal val;
        if (decimal.TryParse(value, out val)) {
            Log("Parsed decimal value: {0}", val);
            string output = val.ToString(format, CultureInfo.InvariantCulture.NumberFormat);
            Log("Formatted string value: {0}", output);
            return output;
        }

    On my machine - and every other client machine - the logfile output is:

        Original string value: 1.010000
        Parsed decimal value: 1.010000
        Formatted string value: 1.01

    On the defective machine the output is:

        Original string value: 1.010000
        Parsed decimal value: 0.010000
        Formatted string value: 0.01

    So it would appear that the decimal.TryParse method is at fault. Things we've tried: uninstalling and reinstalling the client application; uninstalling and reinstalling .NET 3.5 SP1; comparing the defective machine's regional settings for numbers (using English (United Kingdom)) to those of a working machine - no differences. Has anyone seen anything like this or has any suggestions? I'm quickly running out of ideas... While I was typing this, some more info came in: passing a string value of "10000" to Convert.ToInt32() returns 0, so that also seems to drop the leading 1.

    Read the article

  • documentFragment.cloneNode(true) doesn't clone jQuery data

    - by taber
    I have a documentFragment with several child nodes containing some .data() added like so:

        myDocumentFragment = document.createDocumentFragment();
        for(...) {
            myDocumentFragment.appendChild(
                $('<a/>').addClass('button')
                         .attr('href', 'javascript:void(0)')
                         .html('click me')
                         .data('rowData', { 'id': 103, 'test': 'testy' })
                         .get(0)
            );
        }

    When I try to append the documentFragment to a div on the page:

        $('#div').append( myDocumentFragment );

    I can access the data just fine:

        alert( $('#div a:first').data('rowData').id ); // alerts '103'

    But if I clone the node with cloneNode(true), I can't access the node's data. :(

        $('#div').append( myDocumentFragment.cloneNode(true) );
        ...
        alert( $('#div a:first').data('rowData').id ); // alerts undefined

    Has anyone else done this or know of a workaround? I guess I could store the row's data in jQuery.data('#some_random_parent_div', 'rows', [array of ids]), but that kinda defeats the purpose of making the data immediately/easily available to each row. I've also read that jQuery uses documentFragments, but I'm not sure exactly how, or in what methods. Does anyone have any more details there? Thanks!

    Read the article

  • Jquery event to fire in ajax loaded content, IE & FF problem

    - by Sylph
    Hello, I'm trying to trigger onclick events in ajax-loaded content but it doesn't seem to work. Here is my code:

        <li><a href="#" id="subtopic">Title</a></li>
        <script type="text/javascript">
        $(function() {
            $("#subtopic").click(function() {
                $.ajax({
                    url: 'test.html',
                    success: function(data) {
                        $('#result').html(data);
                        $(".filetree").treeview();
                        $('#result').click();
                        $('#result').trigger("update");
                    }
                });
            });
        });
        </script>

    In my loaded content:

        <a href="#" id="addSubTopic">Click here</a>
        <script type="text/javascript">
        $(document).ready(function() {
            alert("000");
        });
        $(function() {
            $("#addSubTopic").click(function() {
                alert("0");
                $.ajax({
                    url: url,
                    success: function(data) {
                        $("#listRes").append(data);
                    }
                });
            });
        });
        </script>

    In IE it works fine: the click works and I get the "alert". But in Firefox the whole thing just goes dead. However, my jQuery plugin call $(".filetree").treeview(); works fine in both IE and FF. Any advice? I have tried using .live and .trigger("update"). Thanks in advance.

    Read the article

  • Speed of CSS

    - by Ólafur Waage
    This is just a question to help me understand CSS rendering better. Let's say we have a million lines of this:

        <div class="first">
            <div class="second">
                <span class="third">Hello World</span>
            </div>
        </div>

    Which would be the fastest way to change the color of Hello World to red?

        .third { color: red; }
        div.third { color: red; }
        div.second div.third { color: red; }
        div.first div.second div.third { color: red; }

    Also, what if there was a tag in the middle that had a unique id of "foo"? Which one of the CSS methods above would be the fastest? I know why these methods are used etc.; I'm just trying to better grasp the rendering technique of the browsers, and I have no idea how to make a test that times it. UPDATE: Nice answer Gumbo. From the looks of it, it would be quicker on a regular site to write the full definition of a tag, since it finds the parents and narrows the search for every parent found. That could be bad in the sense that you'd have a pretty large CSS file, though.

    Read the article
