Search Results

Search found 25727 results on 1030 pages for 'solution'.

  • C# Begin/EndReceive - how do I read large data?

    - by ryeguy
    When reading data in chunks of, say, 1024 bytes, how do I continue to read from a socket that receives a message bigger than 1024 bytes until there is no data left? Should I just use BeginReceive to read a packet's length prefix only, and then once that is retrieved, use Receive() (in the async thread) to read the rest of the packet? Or is there another way?

    Edit: I thought Jon Skeet's link had the solution, but there is a bit of a speed bump with that code. The code I used is:

        public class StateObject
        {
            public Socket workSocket = null;
            public const int BUFFER_SIZE = 1024;
            public byte[] buffer = new byte[BUFFER_SIZE];
            public StringBuilder sb = new StringBuilder();
        }

        public static void Read_Callback(IAsyncResult ar)
        {
            StateObject so = (StateObject) ar.AsyncState;
            Socket s = so.workSocket;
            int read = s.EndReceive(ar);
            if (read > 0)
            {
                so.sb.Append(Encoding.ASCII.GetString(so.buffer, 0, read));
                if (read == StateObject.BUFFER_SIZE)
                {
                    s.BeginReceive(so.buffer, 0, StateObject.BUFFER_SIZE, 0,
                        new AsyncCallback(Async_Send_Receive.Read_Callback), so);
                    return;
                }
            }
            if (so.sb.Length > 0)
            {
                // All of the data has been read, so display it to the console
                string strContent = so.sb.ToString();
                Console.WriteLine(String.Format("Read {0} bytes from socket, data = {1}",
                    strContent.Length, strContent));
            }
            s.Close();
        }

    Now this corrected code works fine most of the time, but it fails when the packet's size is a multiple of the buffer. The reason is that if the buffer gets filled on a read, it is assumed there is more data; but then the same problem happens as before. A 2-byte buffer, for example, gets filled twice on a 4-byte packet and assumes there is more data. It then blocks because there is nothing left to read. The problem is that the receive function doesn't know where the end of the packet is.

    This got me thinking about two possible solutions: I could either have an end-of-packet delimiter, or I could read the packet header to find the length and then receive exactly that amount (as I originally suggested). There are problems with each of these, though. I don't like the idea of using a delimiter, as a user could somehow work it into a packet via an input string from the app and screw it up. It also just seems kind of sloppy to me. The length header sounds OK, but I'm planning on using protocol buffers - I don't know the format of the data. Is there a length header? How many bytes is it? Would this be something I implement myself? Etc. What should I do?
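
    A common answer to the framing question is a 4-byte length prefix written by the sender. As a hedged illustration only (the little-endian Int32 prefix and the helper names below are assumptions, not part of the original post), reading one framed message synchronously might look like:

        using System;
        using System.Net.Sockets;

        static class LengthPrefixedReader
        {
            // Reads exactly 'count' bytes, looping because Receive may return fewer.
            static byte[] ReadExact(Socket socket, int count)
            {
                var buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
                    if (read == 0) // peer closed the connection mid-message
                        throw new SocketException((int)SocketError.ConnectionReset);
                    offset += read;
                }
                return buffer;
            }

            // Reads one length-prefixed message: 4-byte little-endian length, then payload.
            public static byte[] ReadMessage(Socket socket)
            {
                byte[] header = ReadExact(socket, 4);
                int length = BitConverter.ToInt32(header, 0);
                return ReadExact(socket, length);
            }
        }

    Note that protocol buffer messages are not self-delimiting on a stream, so a prefix like this is typically something you implement yourself on both ends of the connection.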

  • MVC 2 AntiForgeryToken - Why symmetric encryption + IPrincipal?

    - by Brad R
    We recently updated our solution to MVC 2, and this has changed the way the AntiForgeryToken works. Unfortunately it no longer fits with our AJAX framework. The problem is that MVC 2 now uses symmetric encryption to encode some properties about the user, including the user's Name property (from IPrincipal). We are able to securely register a new user using AJAX, after which subsequent AJAX calls will be invalid, because the anti-forgery token changes once the user has been granted a new principal. There are also other cases when this may happen, such as a user updating their name.

    My main question is: why does MVC 2 even bother using symmetric encryption? And then why does it care about the user name property on the principal? If my understanding is correct, then any random shared secret will do. The basic principle is that the user is sent a cookie with some specific data (HttpOnly!). This cookie is then required to match a form variable sent back with each request that may have side effects (POSTs, usually). Since this is only meant to protect from cross-site attacks, it is easy to craft a response that would pass the test, but only if you had full access to the cookie. Since a cross-site attacker is not going to have access to your user's cookies, you are protected.

    By using symmetric encryption, what is the advantage in checking the contents of the cookie? That is, if I have already sent an HttpOnly cookie, the attacker cannot override it (unless the browser has a major security issue), so why do I then need to check it again? After thinking about it, it appears to be one of those 'added layer of security' cases - but if your first line of defence (HttpOnly) has fallen, then the attacker is going to get past the second layer anyway: they have full access to the user's cookie collection and could just impersonate them directly, instead of using an indirect XSS/CSRF attack.

    Of course I could be missing a major issue, but I haven't found it yet. If there are some obvious or subtle issues at play here, I would like to be aware of them.
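
    For reference, the double-submit scheme described above - a random, opaque token that must match between an HttpOnly cookie and a hidden form field - can be sketched without any symmetric encryption. This illustrates the poster's argument only; it is not the actual MVC 2 implementation:

        using System;
        using System.Security.Cryptography;

        static class SimpleAntiForgery
        {
            // Issue a random token; store it in an HttpOnly cookie AND a hidden form field.
            public static string CreateToken()
            {
                var bytes = new byte[32];
                using (var rng = new RNGCryptoServiceProvider())
                    rng.GetBytes(bytes);
                return Convert.ToBase64String(bytes);
            }

            // On a POST, the two copies must match. A cross-site attacker cannot
            // read the cookie, so they cannot supply a matching form field.
            public static bool Validate(string cookieToken, string formToken)
            {
                return !string.IsNullOrEmpty(cookieToken) && cookieToken == formToken;
            }
        }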

  • LINQ InsertOnSubmit Required Fields needed for debugging

    - by Derek Hunziker
    Hi All, I've been using the ADO.NET Strongly-Typed DataSet model for about 2 years now for handling CRUD and stored procedure executions. This past year I built my first MVC app and I really enjoyed the ease and flexibility of LINQ. Perhaps the biggest selling point for me was that with LINQ I didn't have to create "Insert" stored procedures that would return the SCOPE_IDENTITY anymore (the auto-generated insert statements in the DataSet model were not capable of this without modification). Currently, I'm using LINQ with ASP.NET 3.5 WebForms. My inserts look like this:

        ProductsDataContext dc = new ProductsDataContext();
        product p = new product
        {
            Title = "New Product",
            Price = 59.99,
            Archived = false
        };
        dc.products.InsertOnSubmit(p);
        dc.SubmitChanges();
        int productId = p.Id;

    So, this product example is pretty basic, and in the future I'll probably be adding more fields to the database, such as "InStock", "Quantity", etc. The way I understand it, I will need to add those fields to the database table and then delete and re-add the tables in the LINQ to SQL Class design view in order to refresh the DataContext. Does that sound right?

    The problem is that any new fields that are non-null are NOT caught by the ASP.NET build process. For example, if I added a non-null field "Quantity" to the database, the code above would still build. In the DataSet model, the stored procedure method would accept a certain number of parameters and would warn me that my insert would fail if I didn't include a quantity value. The same goes for LINQ stored procedure methods; however, to my knowledge, LINQ doesn't offer a way to auto-generate the insert statements, and that means I'm back to where I started.

    The bottom line is that if I use insert statements like the one above and I add a non-null field to my database, it would break my app in about 10-20 places and there would be no way for me to detect it. Is my only option to do a solution-wide search for the keyword "products.InsertOnSubmit" and make sure the new field is getting assigned? Is there a better way? Thanks!
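
    One way to get the compile-time check back is to funnel all inserts through a factory method in a partial class, so a new non-null column only has to be added in one place. A sketch, under the assumption that the generated entity is named product with the members shown above:

        // In a separate file, extending the generated LINQ to SQL entity.
        public partial class product
        {
            // Every required (non-null) column is a parameter. Adding a new
            // required column here turns each out-of-date call site into a
            // compile error instead of a runtime insert failure.
            public static product Create(string title, double price, bool archived)
            {
                return new product
                {
                    Title = title,
                    Price = price,
                    Archived = archived
                };
            }
        }

    With this in place, a solution-wide search is no longer needed: the compiler reports every call to product.Create that is missing the new field.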

  • Good Secure Backups for Developers at Home

    - by slashmais
    What is a good, secure method of doing backups for programmers who do research & development at home and cannot afford to lose any work? Conditions:

    1. The backups must ALWAYS be within reasonably easy reach.
    2. An Internet connection cannot be guaranteed to be always available.
    3. The solution must be either FREE or priced within reason, and subject to 2 above.

    Status Report: this is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere):

    - BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
    - Storebackup is a backup utility that stores files on other disks.
    - mybackware: these scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
    - Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network based backup program.
    - AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable transport independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
    - rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
    - rbme: using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
    - Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync [for] incremental archives.

    Other possibilities:

    - Using a Distributed Version Control System (DVCS) such as Git (/Easy Git), Bazaar, or Mercurial answers the need to have the backup available locally.
    - Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account.

    Strategies: see crazyscot's answer.

  • Fuzzy Regex, Text Processing, Lexical Analysis?

    - by justinzane
    I'm not quite sure what terminology to search for, so my title is funky... Here is the workflow I've got:

    1. Semi-structured documents are scanned to file.
    2. The files are OCR'd to text.
    3. The text is parsed into Python objects.
    4. The objects are serialized (to SQL, JSON, whatever) for use.

    The documents are structured like this:

        HEADER blah blah, Page ###
        blah Garbage text...
        1. Question Text... continued until now.
        A. Choice text... adsadsf.
        B. Another Choice...
        2. Another Question...

    I need to extract the questions and choices. The problem is that, because the text is OCR output, there are occasional strange substitutions like '2' - 'Z', which makes ordinary regular expressions useless. I've tried the Levenshtein module and it helps, but it requires prior knowledge of what edit distance is to be expected. I don't know whether I'm looking to create a parser? A lexer? Something else? This has led me down all kinds of interesting but irrelevant paths. Guidance would be greatly appreciated.

    Oh, also, the text is generally from specific technical domains, so general spelling tools are not so helpful. Regarding the structure of the documents, there is no clear visual pattern - like line breaks or indentation - with the exception of the fact that "questions" usually begin a line. Crap on the document can cause characters to appear before the actual beginning of the line, which means that something along the lines of r'^[0-9]+' does not reliably work. Though the "questions" always begin with an int, a period and a space, the OCR can substitute other characters or skip characters. This is not so much a problem with Tesseract or Cuneiform, rather with the poor quality of the paper documents.

    # Note: for the project in question, it was decided that having a human prep the OCR'd text was better than spending the time coding a solution. I'd still love good pointers, however.
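
    For a flavor of the tolerant matching involved, here is a sketch of an edit-distance test for "question start" lines (the one-edit threshold and the expected-number heuristic are assumptions for illustration, and the language is incidental):

        using System;

        static class FuzzyLineClassifier
        {
            // Standard dynamic-programming edit distance.
            static int Levenshtein(string a, string b)
            {
                var d = new int[a.Length + 1, b.Length + 1];
                for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
                for (int j = 0; j <= b.Length; j++) d[0, j] = j;
                for (int i = 1; i <= a.Length; i++)
                    for (int j = 1; j <= b.Length; j++)
                    {
                        int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                        d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                           d[i - 1, j - 1] + cost);
                    }
                return d[a.Length, b.Length];
            }

            // A line "starts like a question" if some short prefix of it is within
            // one edit of "N. " for the next expected question number N. Keying the
            // match to the expected number cuts false positives from stray digits,
            // and the one-edit slack absorbs OCR substitutions and leading junk.
            public static bool LooksLikeQuestionStart(string line, int expectedNumber)
            {
                string pattern = expectedNumber + ". ";
                for (int len = pattern.Length - 1;
                     len <= pattern.Length + 2 && len <= line.Length; len++)
                {
                    if (Levenshtein(line.Substring(0, len), pattern) <= 1)
                        return true;
                }
                return false;
            }
        }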

  • Maven-ear-plugin - excluding multiple modules i.e. jars, wars etc.

    - by James Murphy
    I've been using the Maven EAR plugin for creating my ear files for a new project. I noticed in the plugin documentation that you can specify exclude statements for modules. For example, the configuration for my plugin is as follows:

        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-ear-plugin</artifactId>
            <version>2.4.1</version>
            <configuration>
              <jboss>
                <version>5</version>
              </jboss>
              <modules>
                <!-- Include the templatecontroller.jar inside the ear -->
                <jarModule>
                  <groupId>com.kewill.kdm</groupId>
                  <artifactId>templatecontroller</artifactId>
                  <bundleFileName>templatecontroller.jar</bundleFileName>
                  <includeInApplicationXml>true</includeInApplicationXml>
                </jarModule>
                <!-- Exclude the following classes from the ear -->
                <jarModule>
                  <groupId>javax.activation</groupId>
                  <artifactId>activation</artifactId>
                  <excluded>true</excluded>
                </jarModule>
                <jarModule>
                  <groupId>antlr</groupId>
                  <artifactId>antlr</artifactId>
                  <excluded>true</excluded>
                </jarModule>
                <!-- ... declare multiple excludes ... -->
              </modules>
              <security>
                <security-role id="SecurityRole_1234">
                  <role-name>admin</role-name>
                </security-role>
              </security>
            </configuration>
          </plugin>
        </plugins>

    This approach is absolutely fine for small projects where you have, say, 4-5 modules to exclude. However, in my project I have 30+, and we've only just started the project, so as it expands this number is likely to grow. Besides explicitly declaring an exclude statement per module, is it possible to use wildcards or an exclude-all-dependencies flag, so that only the modules I declare are included and everything else is excluded? Is anyone aware of a more elegant solution?

  • Stand-alone Java code formatter/beautifier/pretty printer?

    - by Greg Mattes
    I'm interested in learning about the available choices of high-quality, stand-alone source code formatters for Java. The formatter must be stand-alone, that is, it must support a "batch" mode that is decoupled from any particular development environment. Ideally it should be independent of any particular operating system as well. So, a built-in formatter for the IDE du jour is of little interest here (unless that IDE supports batch mode formatter invocation, perhaps from the command line). A formatter written in closed-source C/C++ that only runs on, say, Windows is not ideal, but is somewhat interesting.

    To be clear, a "formatter" (or "beautifier") is not the same as a "style checker." A formatter accepts source code as input, applies styling rules, and produces styled source code that is semantically equivalent to the original source code. A style checker also applies styling rules, but it simply reports rule violations without producing modified source code as output. So the picture looks like this:

    Formatter (produces modified source code that conforms to styling rules):
        Read Source Code → Apply Styling Rules → Write Styled Source Code

    Style Checker (does not produce modified source code):
        Read Source Code → Apply Styling Rules → Write Rule Violations

    Further clarifications: solutions must be highly configurable. I want to be able to specify my own style, not simply select from a canned list. Also, I'm not looking for a general purpose pretty-printer written in Java that can pretty-print many things; I want to style Java code. I'm also not necessarily interested in a grand-unified formatter for many languages. I suppose it might be nice for a solution to have support for languages other than Java, but that is not a requirement. Furthermore, tools that only perform code highlighting are right out. I'm also not interested in a web service; I want a tool that I can run locally. Finally, solutions need not be restricted to open source, public domain, shareware, free software, commercial, or anything else. All forms of licensing are acceptable.

  • Android - How to scan Access Points and select strongest signal?

    - by Donal Rafferty
    I am currently trying to write a class in Android that will scan for access points, calculate which access point has the best signal, and then connect to that access point. So the application will be able to scan on the move and attach to new access points on the go. I have the scanning and the calculation of the best signal working, but when it comes to attaching to the best access point I am having trouble. It appears that enableNetwork(netId, disableOthers) is the only method for attaching to an access point, and this causes problems, because from my scan results I am not able to get the id of the access point with the strongest signal. This is my code:

        public void doWifiScan() {
            scanTask = new TimerTask() {
                public void run() {
                    handler.post(new Runnable() {
                        public void run() {
                            sResults = wifiManager.scan(getBaseContext());
                            if (sResults != null)
                                Log.d("TIMER", "sResults count" + sResults.size());
                            ScanResult scan = wifiManager.calculateBestAP(sResults);
                            wifiManager.addNewAccessPoint(scan);
                        }
                    });
                }
            };
            t.schedule(scanTask, 3000, 30000);
        }

        public ScanResult calculateBestAP(List<ScanResult> sResults) {
            ScanResult bestSignal = null;
            for (ScanResult result : sResults) {
                if (bestSignal == null
                        || WifiManager.compareSignalLevel(bestSignal.level, result.level) < 0)
                    bestSignal = result;
            }
            String message = String.format("%s networks found. %s is the strongest. %s is the bsid",
                    sResults.size(), bestSignal.SSID, bestSignal.BSSID);
            Log.d("sResult", message);
            return bestSignal;
        }

        public void addNewAccessPoint(ScanResult scanResult) {
            WifiConfiguration wc = new WifiConfiguration();
            wc.SSID = '\"' + scanResult.SSID + '\"';
            //wc.preSharedKey = "\"password\"";
            wc.hiddenSSID = true;
            wc.status = WifiConfiguration.Status.ENABLED;
            wc.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.TKIP);
            wc.allowedGroupCiphers.set(WifiConfiguration.GroupCipher.CCMP);
            wc.allowedKeyManagement.set(WifiConfiguration.KeyMgmt.WPA_PSK);
            wc.allowedPairwiseCiphers.set(WifiConfiguration.PairwiseCipher.TKIP);
            wc.allowedPairwiseCiphers.set(WifiConfiguration.PairwiseCipher.CCMP);
            wc.allowedProtocols.set(WifiConfiguration.Protocol.RSN);
            int res = mainWifi.addNetwork(wc);
            Log.d("WifiPreference", "add Network returned " + res);
            boolean b = mainWifi.enableNetwork(res, false);
            Log.d("WifiPreference", "enableNetwork returned " + b);
        }

    When I try to use addNewAccessPoint(ScanResult scanResult), it just adds another AP to the list in the Settings application with the same name as the one with the best signal, so I end up with loads of duplicates and never actually attach to them. Can anyone point me in the direction of a better solution?

  • SSIS DTS Package flat file error - "The file name specified in the connection was not valid"

    - by MisterZimbu
    I have a pretty basic SSIS package that attempts to read a file hosted on a share and import its contents to a database table. The package runs fine when I run it manually within SSIS. However, when I set up a SQL Agent job and attempt to execute it, I get the following error:

        Executed as user: DOMAIN\UserName. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 64-bit
        Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
        Started: 10:14:17 AM
        Error: 2010-05-03 10:14:17.75 Code: 0xC001401E Source: DataImport Connection manager "Data File Local"
        Description: The file name "\\10.1.1.159\llpf\datafile.dat" specified in the connection was not valid. End Error
        Error: 2010-05-03 10:14:17.75 Code: 0xC001401D Source: DataAnimalImport
        Description: Connection "Data File Local" failed validation. End Error
        DTExec: The package execution returned DTSER_FAILURE (1).
        Started: 10:14:17 AM  Finished: 10:14:17 AM  Elapsed: 0.594 seconds.
        The package execution failed. The step failed.

    This leads me to believe it's a permissions issue, but every attempt I've made to fix it has failed. What I've tried so far:

    - Ran as the SQL Agent account (DOMAIN\SqlAgent) - yields the same error. DOMAIN\SqlAgent has "Full Control" permissions on both the share and the uploaded file.
    - Set up a proxy account with a different account's credentials (DOMAIN\Account) - yields the same error. As above, "Full Control" permissions were given over the share to that account.
    - Gave "Everyone" full control permissions over the share (temporarily!). Yielded the same error.
    - Manually copied the file to a local path and tested with the SQL Agent account. Worked properly.
    - Added an ActiveX script task that would first copy the remotely hosted file to a local path, and then had the DTS package reference the local file. Gave a completely nondescriptive (even by SSIS standards) error when trying to run the script.
    - Set up a proxy account using my own personal account's credentials - worked correctly. However, this is not an acceptable solution, as there are password policies in place on my account, and it is bad practice to set things up this way in general.

    Any ideas? I'm still convinced it's a permissions issue. However, what I've read from various searches more or less says that giving the executing account permissions on the share should work, and that is not the case here (unless I'm missing something obscure when setting up permissions on the share).

  • Why does Font.registerFont throw an error the second time a swf is loaded?

    - by Al Brown
    I've found an issue (in Flash CS5) when using TLFTextFields and fonts shared at runtime, and I wondered if anyone has a solution. Here's the scenario:

    - Fonts.swf: library for fonts (runtime shared, DF4)
    - Popup.swf: popup with a TLFTextField (with text in it) using a font from Fonts.swf
    - Main.swf: the main swf with a button that loads and unloads Popup.swf (using Loader or SafeLoader gives the same results)

    Nice and simple. Run Main.swf, click the button, and the popup appears with the text. Click the button again to remove the popup. All well and good. Now click the button again and I get the following error:

        ArgumentError: Error #1508: The value specified for argument font is invalid.
            at flash.text::Font$/registerFont()
            at Popup_fla::MainTimeline()[Popup_fla.MainTimeline::MainTimeline:12]

    I'm presuming this happens because the font is already registered (checked when clicking before the load). Does anyone know how to get past this? If you are wondering, here's my Main.as:

        package {
            import fl.controls.Button;
            import flash.display.Loader;
            import flash.display.Sprite;
            import flash.events.Event;
            import flash.events.IOErrorEvent;
            import flash.events.MouseEvent;
            import flash.events.UncaughtErrorEvent;
            import flash.net.URLRequest;
            import flash.text.Font;

            public class Main extends Sprite {
                public var testPopupBtn:Button;
                protected var loader:Loader;

                public function Main() {
                    trace("Main.Main()");
                    testPopupBtn.label = "open";
                    testPopupBtn.addEventListener(MouseEvent.CLICK, testClickHandler);
                }

                protected function testClickHandler(event:MouseEvent):void {
                    trace("Main.testClickHandler(event)");
                    if (loader) {
                        testPopupBtn.label = "open";
                        this.removeChild(loader);
                        //loader.unloadAndStop();
                        loader = null;
                    } else {
                        testPopupBtn.label = "close";
                        trace("Registered Fonts -->");
                        var fonts:Array = Font.enumerateFonts(false);
                        for each (var font:Font in fonts) {
                            trace("\t", font.fontName, font.fontStyle, font.fontType);
                        }
                        trace("<--");
                        loader = new Loader();
                        loader.uncaughtErrorEvents.addEventListener(UncaughtErrorEvent.UNCAUGHT_ERROR, uncaughtErrorHandler);
                        this.addChild(loader);
                        try {
                            loader.load(new URLRequest("Popup.swf"));
                        } catch (e:*) {
                            trace(e);
                        }
                    }
                }

                private function uncaughtErrorHandler(event:UncaughtErrorEvent):void {
                    trace("Main.uncaughtErrorHandler(event)", event);
                }
            }
        }

  • How to store wiki sites (vcs)

    - by Eugen
    Hello, as a personal project I am trying to write a wiki with the help of Django. I'm a beginner when it comes to web development. I am at the (early) point where I need to decide how to store the wiki sites. I have three approaches in mind and would like to know your suggestion.

    1. Flat files. I considered a flat file approach with a version control system like Git or Mercurial. Firstly, I would have some example wikis to look at, like http://hatta.sheep.art.pl/. Secondly, the VCS would probably deal with editing conflicts and keeping the edit history, so I would not have to reinvent the wheel. And thirdly, I could probably easily clone the wiki repository, so I (or for that matter others) could have an offline copy of the wiki. On the other hand, as far as I know, I cannot use Django models with flat files. Then, if I wanted to add fields to a wiki site, like a category, I would need to somehow keep a reference to that flat file in order to associate the fields in the database with the flat file. Besides, I don't know if it is a good idea to have all the wiki sites in one repository; I imagine it is more natural to have something like one repository per wiki site or file. Last but not least, I'm not sure, but I think using flat files would limit my deployment options, because web hosts may not allow creating files (I'm thinking, for example, of Google App Engine).

    2. Storing in a database. By storing the wiki sites in the database I can use Django models and associate arbitrary fields with the wiki site. I would probably also have an easier life deploying the wiki. But I would not get VCS features like history and conflict resolution for free. I searched for Django extensions to help me and I found django-reversion. However, I do not fully understand whether it fits my needs. Does it track model changes - for example, if I change the Django model file - or does it track the content of the models (which is what I need)? Plus, I do not see whether django-reversion would help me with edit conflicts.

    3. Storing a VCS repository in a database field. This would be my ideal solution. It would combine the advantages of both previous approaches without the disadvantages: I would have VCS features, but I would save the wiki sites in a database. The problem is: I have no idea how feasible that is. I just imagine saving a wiki site/source together with a git/mercurial repository in a database field. Yet, I somehow doubt database fields work like that.

    So, I'm open to any other approaches, but this is what I came up with. Also, if you're interested, you can find the crappy early test I'm working on here: http://github.com/eugenkiss/instantwiki-test

  • $.ajax post to PHP mail() using jQuery doesn't work

    - by Vian Esterhuizen
    Hey, I looked around on SO and Google, but couldn't really find a solution. Could someone please explain to me why this doesn't work? This is my first time using AJAX to submit a form.

        $.ajax({
            type: "POST",
            url: "<?php bloginfo('template_url'); ?>/email.php",
            data: dataString,
            success: function() {
                $('#contact_form').html("<div id='message'></div>");
                $('#message').html("<h2>Contact Form Submitted!</h2>")
                    .append("<p>We will be in touch soon.</p>")
                    .hide()
                    .fadeIn(1500, function() {
                        $('#message').append("<img id='checkmark' src='images/check.png' />");
                    });
            }
        });

    and:

        <?php
        $name = $_POST['name'];
        $email = $_POST['email'];
        $phone = $_POST['phone'];
        $subject = $_POST['subject'];
        $message = $_POST['message'];
        $quote = $_POST['quote'];

        $to = "[email protected]";
        $body = "<p>From: ".$name." <".$email."><br />";
        $body .= "Phone: ".$phone."<br />";
        $body .= "Project: ".$subject."<br />";
        $body .= "Quote: ".$quote."</p>";
        $body .= "<p>Message:<br /> ".$message."</p>";

        $headers = "MIME-Version: 1.0" . "\r\n";
        $headers .= "Content-type: text/html; charset=iso-8859-1" . "\r\n";
        $headers .= "From: ".$name."<".$email.">\r\n";
        $headers .= "Reply-To: ".$name."<".$email.">\r\n";

        $subject = "New message from FloorShop.ca about \"".$subject."\"";
        @mail($to, $subject, $body, $headers);
        echo $name, $email, $phone, $subject, $message, $quote;
        ?>

  • C++: Implementing Named Pipes using the Win32 API

    - by Mike Pateras
    I'm trying to implement named pipes in C++, but either my reader isn't reading anything, or my writer isn't writing anything (or both). Here's my reader:

        int main()
        {
            HANDLE pipe = CreateFile(GetPipeName(), GENERIC_READ, 0, NULL,
                                     OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
            char data[1024];
            DWORD numRead = 1;
            while (numRead >= 0)
            {
                ReadFile(pipe, data, 1024, &numRead, NULL);
                if (numRead > 0)
                    cout << data;
            }
            return 0;
        }

        LPCWSTR GetPipeName()
        {
            return L"\\\\.\\pipe\\LogPipe";
        }

    And here's my writer:

        int main()
        {
            HANDLE pipe = CreateFile(GetPipeName(), GENERIC_WRITE, 0, NULL,
                                     OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
            string message = "Hi";
            WriteFile(pipe, message.c_str(), message.length() + 1, NULL, NULL);
            return 0;
        }

        LPCWSTR GetPipeName()
        {
            return L"\\\\.\\pipe\\LogPipe";
        }

    Does that look right? numRead in the reader is always 0, for some reason, and it reads nothing but 1024 -54s (some weird 'I' character).

    Solution - reader (server):

        while (true)
        {
            HANDLE pipe = CreateNamedPipe(GetPipeName(),
                                          PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND,
                                          PIPE_WAIT, 1, 1024, 1024, 120 * 1000, NULL);
            if (pipe == INVALID_HANDLE_VALUE)
            {
                cout << "Error: " << GetLastError();
            }
            char data[1024];
            DWORD numRead;
            ConnectNamedPipe(pipe, NULL);
            ReadFile(pipe, data, 1024, &numRead, NULL);
            if (numRead > 0)
                cout << data << endl;
            CloseHandle(pipe);
        }

    Writer (client):

        HANDLE pipe = CreateFile(GetPipeName(), GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                 OPEN_EXISTING, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE)
        {
            cout << "Error: " << GetLastError();
        }
        string message = "Hi";
        cout << message.length();
        DWORD numWritten;
        WriteFile(pipe, message.c_str(), message.length(), &numWritten, NULL);
        return 0;

    The server blocks until it gets a connected client, reads what the client writes, and then sets itself up for a new connection, ad infinitum. Thanks for the help, all!
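
    As an aside, the same blocking pattern (create pipe, wait for a client, read, close, repeat) has a close managed-code analogue in System.IO.Pipes. A minimal sketch, reusing the pipe name from the example above (illustrative only, not part of the original post):

        using System;
        using System.IO.Pipes;
        using System.Text;

        class LogPipeServer
        {
            static void Main()
            {
                while (true)
                {
                    // Equivalent of CreateNamedPipe + ConnectNamedPipe: block until
                    // a client connects, read one message, then close the instance.
                    using (var server = new NamedPipeServerStream("LogPipe", PipeDirection.In))
                    {
                        server.WaitForConnection();
                        var buffer = new byte[1024];
                        int read = server.Read(buffer, 0, buffer.Length);
                        if (read > 0)
                            Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));
                    }
                    // A client would use:
                    //   new NamedPipeClientStream(".", "LogPipe", PipeDirection.Out)
                    // followed by Connect() and Write().
                }
            }
        }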

  • CSS to Replace Table Layout for Forms

    - by Erick
    I've looked at other questions and am unable to find the solution to this. Consider this image: I want to wrap divs and stack them vertically. The GREEN div would be a wrapper on a line. The BLUE div would contain an HTML label and maybe an icon for a tooltip. The ORANGE div would contain some sort of entry (input, select, textarea). Several of these would be stacked vertically to make up a form.

    I am doing this now, but I have to specify a height for the container div, and that really needs to change depending on the content - considering any entry could land there. Images and other stuff could land here as well. I have a width set on the BLUE div, and the ORANGE one is float:left. How can I get rid of the height on the divs and let it be determined by the content? Is there a better way? Changing everything to something else would be difficult, and I would prefer a way to style all elements or something.

    The code I'm using is like:

        <div class="EntLine">
            <div class="EntLbl">
                <label for="Name">Name</label>
            </div>
            <div class="EntFld">
                <input type="text" id="Name" />
            </div>
        </div>

    The CSS looks like:

        .EntLine {
            height: 20px;
            position: relative;
            margin-top: 2px;
            text-align: left;
            white-space: nowrap;
        }
        .EntLbl {
            float: left;
            width: 120px;
            padding: 3px 0px 0px 3px;
            min-width: 120px;
            max-width: 120px;
            vertical-align: text-top;
        }
        .EntFld {
            float: left;
            height: 20px;
            padding: 0px;
            width: 200px;
        }

    Thanks!

  • Release Process Improvements

    - by wallismark
    The process of creating a new build and releasing it to production is a critical step in the SDLC, but it is often left as an afterthought and varies greatly from one company to the next. I'm hoping people will share improvements they have made to this process in their organisation, so we can all take steps to 'reduce the pain'. So the question is: name one painful/time-consuming part of your release process and what you did to improve it.

    My example: at a previous employer, all developers made database changes against one common development database. Then, when it came to release time, we used Redgate's SQL Compare to generate a huge script from the differences between the Dev and QA databases. This works reasonably well, but the problems with this approach are:

    1. ALL changes in the Dev database are included, some of which may still be 'works in progress'.
    2. Sometimes developers made conflicting changes (that were not noticed until the release was in production).
    3. It was a time-consuming and manual process to create and validate the script (by validate I mean try to weed out issues like problems 1 and 2).
    4. When there were problems with the script (e.g. the order in which things were run, such as creating a record which relies on a foreign key record which is in the script but not yet run), it took time to 'tweak' it so it ran smoothly.
    5. It's not an ideal scenario for Continuous Integration.

    So the solution was:

    1. Enforce a policy that all changes to the database must be scripted.
    2. A naming convention was important for ensuring the correct running order of the scripts.
    3. Create/use a tool to run the scripts at release time (see the sketch below).
    4. Developers had their own copy of the database to develop against (so there was no more 'stepping on each other's toes').

    The next release after we started this process was much faster, with fewer problems; indeed, the only problems found were due to people 'breaking the rules', e.g. not creating a script. Once the issues with releasing to QA were fixed, the release to production was very smooth. We applied a few other changes (like introducing CI), but this was the most significant; overall we reduced release time from around 3 hours down to a maximum of 10-15 minutes.
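
    The "tool to run the scripts" can be as small as a console app that applies .sql files in name order. A minimal sketch (it assumes one batch per file - no GO splitting - and omits the applied-scripts ledger a real runner would keep):

        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Linq;

        class ScriptRunner
        {
            // Runs every .sql file in the folder in name order against the target
            // database. The naming convention (e.g. 0001_create_orders.sql)
            // supplies the running order.
            static void Main(string[] args)
            {
                string folder = args[0];
                string connectionString = args[1];
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    foreach (string path in Directory.GetFiles(folder, "*.sql").OrderBy(p => p))
                    {
                        string sql = File.ReadAllText(path);
                        using (var command = new SqlCommand(sql, connection))
                            command.ExecuteNonQuery();
                        Console.WriteLine("Applied " + Path.GetFileName(path));
                    }
                }
            }
        }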

  • addSubview and autosizing

    - by neoneye
    How does one add views to a window so that the views are resized to fit within the window frame?

    The problem: I'm making a sheet window containing two views, where only one of them is visible at a time, so it's important that the views have the same size as the window. My problem is that either view0 fits correctly and view1 doesn't, or the other way around. I can't figure out how to give them the same size as the window.

    Possible solution: I could just make sure that both views have precisely the same size within Interface Builder; then it would work. However, I'm looking for a way to do this programmatically.

    Screenshot of view0: below you can see the autoresizing problem at the top and the right side, where the view is somehow clipped.

    Screenshot of view1: this view is resized correctly.

    Here is my code. Can the views be resized before adding them to the window, or is it better to do as I do now, where the views are added one by one while changing the window frame? How do you do it?

        NSView* view0 = /* a view made with IB */;
        NSView* view1 = /* another view made with IB */;
        NSWindow* window = [self window];
        NSRect window_frame = [window frame];
        NSView* cv = [[[NSView alloc] initWithFrame:window_frame] autorelease];
        [window setContentView:cv];
        [cv setAutoresizesSubviews:YES];

        // add subview so it fits within the contentview frame
        {
            NSView* v = view0;
            [v setHidden:YES];
            [v setAutoresizesSubviews:NO];
            [cv addSubview:v];
            [v setFrameOrigin:NSZeroPoint];
            [window setFrame:[v frame] display:NO];
            [v setAutoresizesSubviews:YES];
        }

        // add subview so it fits within the contentview frame
        {
            NSView* v = view1;
            [v setHidden:YES];
            [v setAutoresizesSubviews:NO];
            [cv addSubview:v];
            [v setFrameOrigin:NSZeroPoint];
            [window setFrame:[v frame] display:NO];
            [v setAutoresizesSubviews:YES];
        }

        // restore original window frame
        [window setFrame:window_frame display:YES];
        [view0 setHidden:NO];
        [view1 setHidden:YES];

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database/datastore, but the .Net app is a single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB).

    The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. For example, one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which SharePoint used (uses?), and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field it maps to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document-type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I wouldn't need to worry about denormalizing the document. But I've got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, my questions, I guess:

    1. Is a NoSQL DB a good fit?
    2. Is MongoDB the best choice for ASP.Net? (I saw Raven and Velocity, but they're still kind of beta.)
    3. Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would be for when a person clicks "View History" against a document.
    4. How would performance compare between the two (NoSQL DB vs. denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
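
    For reference, the generic-columns pattern described above maps roughly to classes like these (all names are assumed for illustration; this is a sketch of the existing SQL approach, not a recommendation over the document-database route):

        using System;

        // One row per document; generic columns are reused across document types.
        class DocumentRow
        {
            public int DocumentId;
            public int DocumentTypeId;
            public int? IntField1;        // e.g. holds "Invoice Number" for one tenant
            public int? IntField2;        //      and "Member Number" for another
            public DateTime? DateField1;
            public DateTime? DateField2;
            public string StringField1;
        }

        // Per-tenant metadata: which generic column backs which friendly index name.
        class IndexMapping
        {
            public int DocumentTypeId;
            public string IndexName;      // e.g. "Invoice Number"
            public string ColumnName;     // e.g. "IntField1"
        }

    A document database collapses the two into one self-describing record per document, which is where the simplification the question mentions comes from.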

  • How to efficiently save changes made in UI/main thread with Core Data?

    - by Jaanus
    So, there have been several posts here about importing and saving data from an external data source into Core Data. Apple documents a reasonable pattern for this: "import and save on background thread, merge saved objects to main thread." All fine and good.

    I have a related but different problem: the user is modifying data in the UI and main thread, and thus modifies the state of some objects in the managed object context (MOC). I would like to save these changes from time to time. What is a good way to do that?

    Now, you could say that I could do the same: create a background thread with its own MOC and pass the changed objectIDs there. The catch-22 for me is that an object's ID changes when it is saved, and I cannot guarantee the order of things happening. I may end up passing a different objectID into the background thread for the same object, based on whether the object has been previously saved or not, and I don't know whether Core Data can resolve this and see that different objectIDs are pointing to the same object, rather than creating duplicates for me. (I could test this, but I'm lazywebbing with this question first.)

    One thought I had: I could always do MOC saves on a background thread, and queue them up with an operation queue, so that there is always only one save in progress. I would not create a new MOC; I would just use the same MOC as in the main thread. Now, this is not thread-safe, and if someone modifies the MOC in the main thread while it is being saved in the background thread, the results will probably be catastrophic. But, minus the thread safety, you can see the kind of solution I'd wish for.

    To be clear, the problem I need to fix is that if I just do the save in the main thread, it blocks the UI for an unacceptably long period of time, so I want to move the save to a background thread.

    So, questions: what about the reasoning of an object ID changing during saving - can Core Data resolve different IDs to the same object? Would this be the right way of addressing the problem? Any other good ways of doing this?

  • Averaging initial values for rolling series

    - by Dave Jarvis
    Question: given a maximum sliding window size of 40 (i.e., the set of numbers in the list cannot exceed 40), what is the calculation to ensure a smooth averaging transition as the set size grows from 1 to 40?

    Problem description: creating a trend line for a set of data has skewed initial values. The complete set of values is unknown at runtime: they are provided one at a time. It seems like a reverse-weighted average is required so that the initial values are averaged differently. In the image below, the leftmost data for the trend line are incorrectly averaged.

    Current solution: created a new type of ArrayList subclass that calculates the appropriate values and ensures its size never goes beyond the bounds of the sliding window:

        /**
         * A list of Double values that has a maximum capacity enforced by a sliding
         * window. Can calculate the average of its values.
         */
        public class AveragingList extends ArrayList<Double> {
            private float slidingWindowSize = 0.0f;

            /**
             * The initial capacity is used for the sliding window size.
             * @param slidingWindowSize
             */
            public AveragingList( int slidingWindowSize ) {
                super( slidingWindowSize );
                setSlidingWindowSize( (float)slidingWindowSize );
            }

            public boolean add( Double d ) {
                boolean result = super.add( d );
                // Prevent the list from exceeding the maximum sliding window size.
                if( size() > getSlidingWindowSize() ) {
                    remove( 0 );
                }
                return result;
            }

            /**
             * Calculate the average.
             *
             * @return The average of the values stored in this list.
             */
            public double average() {
                double result = 0.0;
                int size = size();
                for( Double d : this ) {
                    result += d.doubleValue();
                }
                return (double)result / (double)size;
            }

            /**
             * Changes the maximum number of numbers stored in this list.
             *
             * @param slidingWindowSize New maximum number of values to remember.
             */
            public void setSlidingWindowSize( float slidingWindowSize ) {
                this.slidingWindowSize = slidingWindowSize;
            }

            /**
             * Returns the number used to determine the maximum values this list can
             * store before it removes the first entry upon adding another value.
             * @return The maximum number of numbers stored in this list.
             */
            public float getSlidingWindowSize() {
                return slidingWindowSize;
            }
        }

    Example input: the data comes into the function one value at a time. For example, data points (Data) and calculated averages (Avg) typically look as follows:

        Data: 17.0  Avg: 17.0
        Data: 17.0  Avg: 17.0
        Data:  5.0  Avg: 13.0
        Data:  5.0  Avg: 11.0

    Related sites: the following pages describe moving averages, but typically when all (or sufficient) data is known:

    - http://www.cs.princeton.edu/introcs/15inout/MovingAverage.java.html
    - http://stackoverflow.com/questions/2161815/r-zoo-series-sliding-window-calculation
    - http://taragana.blogspot.com/
    - http://www.dreamincode.net/forums/index.php?showtopic=92508
    - http://blogs.sun.com/nickstephen/entry/dtrace_and_moving_rolling_averages
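
    The ramp-up itself needs no special weighting: dividing by the number of values seen so far (capped at the window size) yields the correct average at every point, which is what the average() method above already does. The same idea with a ring buffer and a running sum, as a sketch (illustrative names; O(1) per sample instead of re-summing the list):

        using System;

        class RollingAverage
        {
            private readonly double[] window;
            private int count;   // values seen so far, capped at the window length
            private int next;    // ring-buffer write position (also the oldest slot when full)
            private double sum;

            public RollingAverage(int size) { window = new double[size]; }

            // Adds one sample and returns the current rolling average.
            public double Add(double value)
            {
                if (count == window.Length)
                    sum -= window[next];   // evict the oldest value
                else
                    count++;
                window[next] = value;
                next = (next + 1) % window.Length;
                sum += value;
                return sum / count;        // divides by 1..size during ramp-up
            }
        }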

  • NSDocument Subclass not closed by NSWindowController?

    - by Nathan Douglas
    Okay, I'm fairly new to Cocoa and Objective-C, and to OOP in general. As background, I'm working on an extensible editor that stores the user's documents in a package. This of course required some "fun" to get around some issues with NSFileWrapper (i.e. a somewhat sneaky writing and loading process to avoid making NSFileWrappers for every single document within the bundle). The solution I arrived at was to essentially treat my NSDocument subclass as just a shell - use it to make the folder for the bundle, and then pass off writing the actual content of the document to other methods.

    Unfortunately, at some point I seem to have completely screwed the pooch. I don't know how this happened, but closing the document window no longer releases the document. The document object doesn't seem to receive a "close" message - or any related messages - even though the window closes successfully. The end result is that if I start my app, create a new document, save it, close it, and then try to reopen it, the document window never appears. With some creative subclassing and NSLogging, I managed to figure out that the document object was still in memory and still attached to the NSDocumentController instance, so trying to open the document never got past the NSDocumentController's "hmm, currently have that one open" check.

    I did have an NSWindowController and NSDocumentController instance, but I've purged them from my project completely. I've overridden nearly every method of NSDocument trying to find out where the issue is. So far as I know, my Interface Builder bindings are all correct - "Close" in the main menu is attached to "performClose:" of the First Responder, etc. - and I've tried with fresh, unsullied MainMenu and Document xibs as well. I thought it might be something strange with my bundle-writing code, so I basically deleted it all and started from scratch, but that didn't help. I took out my init method overrides, and that didn't help either. I don't have the source of any simple document apps here, so I didn't try the next logical step (substituting known-working code for mine in the readFromURL: and writeToURL: methods).

    I've had this problem for about sixteen hours of uninterrupted troubleshooting now, and needless to say, I'm at the end of my rope. If I can't figure it out, I guess I'm going to redo the project from scratch with a lot more code and intensity based around the bundle-document mess. Any help would be greatly appreciated.

  • Custom EntityNotFoundDelegate

    - by Felix
    Hi all, I get an org.hibernate.ObjectNotFoundException when I try to delete, via Hibernate, an object which doesn't exist anymore. I just want this exception to be ignored. I could catch the exception and ignore it, which would be one solution. But since Hibernate has support for ignoring this exception through org.hibernate.cfg.Configuration#entityNotFoundDelegate, I would like to take advantage of that and control it through configuration.

    The question, then, is: how can I introduce my own/custom implementation of EntityNotFoundDelegate to the org.hibernate.cfg.Configuration? Does anybody have sample code for me? As an additional tip: I use the Spring Framework as well in my project.

    Here is the exception that I get:

        Caused by: org.springframework.orm.hibernate3.HibernateObjectRetrievalFailureException: No row with the given identifier exists: [de.mycompany.domain.ResultObject#810b1334-32d3-02b0-e044-769e0ab48e48]; nested exception is org.hibernate.ObjectNotFoundException: No row with the given identifier exists: [de.mycompany.domain.ResultObject#810b1334-32d3-02b0-e044-769e0ab48e48]
            at org.springframework.orm.hibernate3.SessionFactoryUtils.convertHibernateAccessException(SessionFactoryUtils.java:660)
            at org.springframework.orm.hibernate3.HibernateAccessor.convertHibernateAccessException(HibernateAccessor.java:412)
            at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:424)
            at org.springframework.orm.hibernate3.HibernateTemplate.executeWithNativeSession(HibernateTemplate.java:374)
            at org.springframework.orm.hibernate3.HibernateTemplate.delete(HibernateTemplate.java:865)
            at org.springframework.orm.hibernate3.HibernateTemplate.delete(HibernateTemplate.java:859)
            at de.mycompany.utils.dao.impl.PersistentDaoImpl.delete(PersistentDaoImpl.java:50)
            at de.mycompany.utils.service.ServiceImpl.delete(ServiceImpl.java:68)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
            at java.lang.reflect.Method.invoke(Unknown Source)
            at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
            at de.mycompany.utils.service.ServiceInterceptor.invoke(ServiceInterceptor.java:43)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
            at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
            at $Proxy3.delete(Unknown Source)
            ... 14 more
        Caused by: org.hibernate.ObjectNotFoundException: No row with the given identifier exists: [de.mycompany.domain.ResultObject#810b1334-32d3-02b0-e044-769e0ab48e48]
            at org.hibernate.impl.SessionFactoryImpl$2.handleEntityNotFound(SessionFactoryImpl.java:409)
            at org.hibernate.proxy.AbstractLazyInitializer.checkTargetState(AbstractLazyInitializer.java:108)
            at org.hibernate.proxy.AbstractLazyInitializer.initialize(AbstractLazyInitializer.java:97)
            at org.hibernate.proxy.AbstractLazyInitializer.getImplementation(AbstractLazyInitializer.java:140)
            at org.hibernate.engine.StatefulPersistenceContext.unproxyAndReassociate(StatefulPersistenceContext.java:594)
            at org.hibernate.event.def.DefaultDeleteEventListener.onDelete(DefaultDeleteEventListener.java:90)
            at org.hibernate.event.def.DefaultDeleteEventListener.onDelete(DefaultDeleteEventListener.java:74)
            at org.hibernate.impl.SessionImpl.fireDelete(SessionImpl.java:793)
            at org.hibernate.impl.SessionImpl.delete(SessionImpl.java:778)
            at org.springframework.orm.hibernate3.HibernateTemplate$26.doInHibernate(HibernateTemplate.java:871)
            at org.springframework.orm.hibernate3.HibernateTemplate.doExecute(HibernateTemplate.java:419)
            ... 30 more

    And my versions: Hibernate 3.3.1, Spring 2.5.6. Thanks in advance! Felix

  • How do I access SourceGear Web services using SOAP::Lite?

    - by user565793
    For some reason SourceGear provide an undocumented Web service on their installations. They actually ask developers to use the API instead, because the Web service is kind of messy, but this is a problem in my case, because I cannot use that API in a Perl environment, so their solution for my specific case is to use the Web service. This shouldn't be a problem: using SOAP::Lite, I have connected to several Web services in the past in the same way. But the lack of documentation is a major headache if you don't know where the SOAP calls can be made. I only have an XML sample to decipher where and how to make these calls. It would be great if a real SOAP genius could help me out with this.

    This is an example of the login call and the expected response:

    Request:

        POST /fortress/dragnetwebservice.asmx HTTP/1.1
        Host: velecloudserver
        Content-Type: text/xml; charset=utf-8
        Content-Length: length
        SOAPAction: "http://www.sourcegear.com/schemas/dragnet/LoginPlainText"

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <LoginPlainText xmlns="http://www.sourcegear.com/schemas/dragnet">
              <strLogin>string</strLogin>
              <strPassword>string</strPassword>
            </LoginPlainText>
          </soap:Body>
        </soap:Envelope>

    Response:

        HTTP/1.1 200 OK
        Content-Type: text/xml; charset=utf-8
        Content-Length: length

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <LoginPlainTextResponse xmlns="http://www.sourcegear.com/schemas/dragnet">
              <LoginPlainTextResult>int</LoginPlainTextResult>
              <strAuthTicket>string</strAuthTicket>
            </LoginPlainTextResponse>
          </soap:Body>
        </soap:Envelope>

    I'm looking for a way to be able to assemble this somehow. This is my Perl example:

        my $soap = SOAP::Lite
            -> uri('http://velecloudserver/fortress/dragnetwebservice.asmx')
            -> proxy('http://velecloudserver/fortress/dragnetwebservice.asmx/LoginPlainText');

        my $som = $soap->call('LoginPlainText', SOAP::Data->name('LoginPlainText')->value(
            \SOAP::Data->value([
                SOAP::Data->name('strLogin')->value('admin'),
                SOAP::Data->name('strPassword')->value('Adm1234'),
            ]))
        );

    Any tip would be appreciated.

  • Sql Server Compact 2005 on Visual Studio 2008

    - by Tim
    I'm working on a Windows Forms application that interacts with a Sql Compact database file created by SQL Server 2005. This application was originally developed in Visual Studio 2005 but was recently converted to a Visual Studio 2008 solution. In regards to Sql Compact, we made sure the references were all still set to the assemblies that handle the 2005 version of Sql Compact rather than Sql Compact 3.5. Having done this, the application still runs just as it should - it will still interact with the Compact database, perform synchronization operations, etc.

    However, I just discovered today that Visual Studio tools such as the DataSet Designer do not play well with a Sql Compact database file of an older version than 3.5. If I go to the New Connection... wizard, the only Sql Compact Data Source / Data Provider are for Sql Compact 3.5. I assume that Visual Studio 2008 just doesn't include the data provider for the older version of Sql Compact by default.

    Is there a way you can add the old version of Sql Compact to the list of "Data Sources" for the connection wizard? To see exactly what I'm referring to, click on the Tools menu of Visual Studio 2008 and click Connect to Database... In the window that comes up, click Change... next to the Data source setting. From this dialog there is no way I can select the earlier version of Sql Compact - only 3.5 is available. Maybe I need to add an assembly reference somewhere? Or copy some file(s) from my Visual Studio 2005 directory over to 2008? I would think there would have to be a way for Visual Studio 2008 to be able to interact with a Sql Compact database from Sql Server 2005.

    To provide one more bit of detail: I discovered this problem when I went to my DataSet, right-clicked and tried to add a TableAdapter. The first screen that comes up says, "Choose Your Data Connection". If I leave it set to the Sql Compact connection that we've always used, I now get the following error when clicking the Next button:

        Failed to open a connection to the database. "The selected database was created with an earlier version of SQL Server Compact and needs to be upgraded to SQL Server Compact 3.5 before the connection can be opened or tested. Upgrade the database by creating a new data connection and completing the Add Connection dialog box." Check the connection and try again.

    The only problem here is that we still use Sql Server 2005, and if my understanding is correct, it does not produce subscription files that are compatible with Sql Compact 3.5. If I am wrong in this assumption, please correct me. Any help you can provide is greatly appreciated. Thank you.

  • NSTimer as a self-targeting ivar.

    - by Matt Wilding
    I have come across an awkward situation where I would like to have a class with an NSTimer instance variable that repeatedly calls a method of the class for as long as the instance is alive. For illustration purposes, it might look like this (the original snippet passed repeats:NO, corrected here to repeats:YES to match the repeating behavior described):

        // .h
        @interface MyClock : NSObject {
            NSTimer* _myTimer;
        }
        - (void)timerTick;
        @end

        // .m
        @implementation MyClock

        - (id)init {
            self = [super init];
            if (self) {
                _myTimer = [[NSTimer scheduledTimerWithTimeInterval:1.0f
                                                             target:self
                                                           selector:@selector(timerTick)
                                                           userInfo:nil
                                                            repeats:YES] retain];
            }
            return self;
        }

        - (void)dealloc {
            [_myTimer invalidate];
            [_myTimer release];
            [super dealloc];
        }

        - (void)timerTick {
            // Do something fantastic.
        }

        @end

    That's what I want. I don't want to have to expose an interface on my class to start and stop the internal timer; I just want it to run while the class exists. Seems simple enough. But the problem is that NSTimer retains its target. That means that as long as that timer is active, it keeps the class from being dealloc'd by normal memory management, because the timer has retained it. Manually adjusting the retain count is out of the question.

    This behavior of NSTimer seems like it would make it difficult to ever have a repeating timer as an ivar, because I can't think of a time when an ivar should retain its owning class. This leaves me with the unpleasant duty of coming up with some method of providing an interface on MyClock that allows users of the class to control when the timer is started and stopped. Besides adding unneeded complexity, this is annoying because having one owner of an instance of the class invalidate the timer could step on the toes of another owner who is counting on it to keep running. I could implement my own pseudo-retain-count system for keeping the timer running but, ...seriously? This is way too much work for such a simple concept.

    Any solution I can think of feels hacky. I ended up writing a wrapper for NSTimer that behaves exactly like a normal NSTimer, but doesn't retain its target. I don't like it, and I would appreciate any insight.

  • Windows Media Player issues two requests for the audio on a web page

    - by Ron Harlev
    I'm using Windows Media Player in a web page. I have version 11 installed, so that is the version I'm testing with right now. The player is embedded on the page with this HTML:

        <OBJECT id='MS_mediaPlayer' width="400" height="45"
                classid='CLSID:6BF52A52-394A-11D3-B153-00C04F79FAA6'
                codebase='http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701'
                standby='Loading Microsoft Windows Media Player components...'
                type='application/x-oleobject'>
            <param name='autoStart' value="false">
            <param name='uiMode' value="invisible">
            <param name='loop' value="false">
        </OBJECT>

    I'm calling in JavaScript:

        MS_mediaPlayer.URL = "SomeAudioFile.mp3";
        MS_mediaPlayer.controls.play();

    When I look at Fiddler I can see that the player actually downloads "SomeAudioFile.mp3" twice. Is there some setting I have wrong? I tried setting "autoPlay" to true and avoiding the call to play(). Got the same result - two downloads.

    UPDATE: The first request's user-agent is "Windows-Media-Player/11.0.5721.5268". The second has "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; GTB6; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)". It looks like the browser is running the same request a second time. No idea why. Any ideas?

    UPDATE (4/1/10): Still no solution. I debugged the JS thoroughly and there is only one call to MediaPlayer.URL='.....' to set the audio file. Nothing else triggers the media player to load the file, and there is no other place referencing the audio file on the page. One other interesting fact: the double loading of the audio doesn't happen when I run the browser locally on my development web server, but other remote requests to the same web server do generate the double audio loading. I believe I have eliminated any correlation with a specific IE version or media player version - this happens with IE6-8 and WMP 9-12.
