Search Results

Search found 14745 results on 590 pages for 'setting'.

  • C# program that generates html pages - limit image dimensions on page

    - by Professor Mustard
    I have a C# program that generates a large number of html pages, based on various bits of data and images that I have stored on the file system. The html itself works just fine, but the images can vary greatly in their dimensions. I need a way to ensure that a given image won't exceed a certain size on the page. The simplest way to accomplish this would be through the html itself... if there was some kind of "maxwidth" or "maxheight" property I could set in the html, or maybe a way to force the image to fit inside a table cell (if I used something like this, I'd have to be sure that the non-offending dimension would automatically scale with the one that's being reduced). The problem is, I don't know much about this "fine tuning" kind of stuff in html, and it seems to be a tough thing to Google around for (most of the solutions involve some sort of html specialization; I'm just using plain html). Alternatively, I could determine the width and height of each image at runtime by examining the image in C#, and then set width/height values in the html if the image's dimensions exceed a certain value. The problem here is that it seems incredibly inefficient to load an entire image into memory, just to get its dimensions. I would need a way to "peek" at an image and just get its size (it could be bmp, jpg, gif or png). Any recommendations for either approach would be greatly appreciated.
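
    One way to avoid decoding the whole bitmap is to let GDI+ read little more than the header. The sketch below is a minimal, hedged example (not from the original program; the class name and file path are placeholders). The validateImageData = false overload of Image.FromStream skips validating and decoding the pixel data, which is the usual trick for "peeking" at dimensions:

        // Reads width/height of a bmp/jpg/gif/png without validating/decoding the full image.
        using System.Drawing;
        using System.IO;

        static class ImagePeek
        {
            public static Size GetDimensions(string path)
            {
                using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
                // false, false = no embedded color management, no image-data validation
                using (var img = Image.FromStream(stream, false, false))
                {
                    return new Size(img.Width, img.Height);
                }
            }
        }

    On the html side, a plain style="max-width:400px; max-height:300px;" on the <img> tag (values are illustrative) makes reasonably modern browsers scale the image down proportionally, so the "non-offending" dimension shrinks along with the constrained one.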

  • how to escape white space in bash loop list

    - by MCS
    I have a bash shell script that loops through all child directories (but not files) of a certain directory. The problem is that some of the directory names contain spaces. Here are the contents of my test directory:

        $ ls -F test
        Baltimore/  Cherry Hill/  Edison/  New York City/  Philadelphia/  cities.txt

    And the code that loops through the directories:

        for f in `find test/* -type d`; do
          echo $f
        done

    Here's the output:

        test/Baltimore
        test/Cherry
        Hill
        test/Edison
        test/New
        York
        City
        test/Philadelphia

    Cherry Hill and New York City are treated as 2 or 3 separate entries. I tried quoting the filenames, like so:

        for f in `find test/* -type d | sed -e 's/^/\"/' | sed -e 's/$/\"/'`; do
          echo $f
        done

    but to no avail. There's got to be a simple way to do this. Any ideas? The answers below are great. But to make this more complicated - I don't always want to use the directories listed in my test directory. Sometimes I want to pass in the directory names as command-line parameters instead. I took Charles' suggestion of setting the IFS and came up with the following:

        dirlist="${@}"
        (
          [[ -z "$dirlist" ]] && dirlist=`find test -mindepth 1 -type d` && IFS=$'\n'
          for d in $dirlist; do
            echo $d
          done
        )

    and this works just fine unless there are spaces in the command line arguments (even if those arguments are quoted). For example, calling the script like this:

        test.sh "Cherry Hill" "New York City"

    produces the following output:

        Cherry
        Hill
        New
        York
        City

    Again, I know there must be a way to do this - I just don't know what it is...

  • Delphi: Why does IdHTTP.ConnectTimeout make requests slower?

    - by K.Sandell
    I discovered that setting the ConnectTimeout property of a TIdHTTP component makes requests (GET and POST) about 120ms slower. Why is this, and can I avoid/bypass this somehow? Env: D2010 with shipped Indy components, all updates installed for D2010. OS is WinXP (32bit) SP3 with most patches... My timing routine is:

        Procedure DoGet;
        Var
          Freq, T1, T2 : Int64;
          Cli : TIdHTTP;
          S : String;
        begin
          QueryPerformanceFrequency(Freq);
          Try
            QueryPerformanceCounter(T1);
            Cli := TIdHTTP.Create( NIL );
            Cli.ConnectTimeout := 1000; // without this we get < 15ms!!
            S := Cli.Get('http://127.0.0.1/empty_page.php');
          Finally
            FreeAndNil(Cli);
            QueryPerformanceCounter(T2);
          End;
          Memo1.Lines.Add('Time = ' + FormatFloat('0.000', (T2-T1)/Freq));
        End;

    With the ConnectTimeout set in code I get avg. times of 130-140ms, without it it's about 5-15ms ...

  • using wget against protected site with NTLM

    - by Joey V.
    Trying to mirror a local intranet site and have found previous questions using 'wget'. It works great with sites that are anonymous, but I have not been able to use it against a site that is expecting username\password (IIS with Integrated Windows Authentication). Here is what I pass in:

        wget -c --http-user='domain\user' --http-password=pwd http://local/site -dv

    Here is the debug output (note I replaced some with dummy values obviously):

        Setting --verbose (verbose) to 1
        DEBUG output created by Wget 1.11.4 on Windows-MSVC.
        --2009-07-14 09:39:04--  http://local/site
        Host `local' has not issued a general basic challenge.
        Resolving local... seconds 0.00, x.x.x.x
        Caching local = x.x.x.x
        Connecting to local|x.x.x.x|:80... seconds 0.00, connected.
        Created socket 1896.
        Releasing 0x003e32b0 (new refcount 1).
        ---request begin---
        GET /site/ HTTP/1.0
        User-Agent: Wget/1.11.4
        Accept: */*
        Host: local
        Connection: Keep-Alive
        ---request end---
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 401 Access Denied
        Server: Microsoft-IIS/5.1
        Date: Tue, 14 Jul 2009 13:39:04 GMT
        WWW-Authenticate: Negotiate
        WWW-Authenticate: NTLM
        Content-Length: 4431
        Content-Type: text/html
        ---response end---
        401 Access Denied
        Closed fd 1896
        Unknown authentication scheme.
        Authorization failed.
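
    Not part of the original question, but if wget keeps rejecting the NTLM/Negotiate challenge, one fallback (assuming .NET is available on the client box; the user, password, domain and URL below are just the placeholders from the question) is to fetch pages with System.Net, which responds to Integrated Windows Authentication automatically when credentials are supplied:

        // Minimal sketch: download one page from an IIS site that requires NTLM / Integrated Windows Authentication.
        using System;
        using System.Net;

        class FetchPage
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Explicit domain credentials; CredentialCache.DefaultCredentials would use the logged-on user instead.
                    client.Credentials = new NetworkCredential("user", "pwd", "domain");
                    string html = client.DownloadString("http://local/site");
                    Console.WriteLine("Fetched {0} characters", html.Length);
                }
            }
        }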

  • Load testing multipart form

    - by JacobM
    I'm trying to load-test a Rails application using JMeter. A critical part of the application involves a form that includes both text inputs and file uploads. It works fine in a browser, but when I try to post that page in JMeter, Rails is saving all of the parts of the multipart form as temp files, which causes things to break when it's looking for a string and gets a tempfile instead. It appears that the difference is that, from a browser, the piece of the multipart request that contains a text input looks like this:

        -----------------------------7d93b4186074c
        Content-Disposition: form-data; name="field_name"

        test
        -----------------------------7d93b4186074c

    while from JMeter it looks like this:

        -----------------------------7d159c1302d0y0
        Content-Disposition: form-data; name="field_name"
        Content-Type: text/plain; charset=utf-8
        Content-Transfer-Encoding: 8bit

        test
        -----------------------------7d159c1302d0y0

    So apparently Rails sees the former and interprets it as a plain text value and treats it as a string, but sees the latter and saves it to a temp file. I have not been able to find a setting to convince JMeter not to send the additional headers in the multipart form for non-file fields. Is there a way to convince Rails to ignore those headers and treat the text/plain text as strings instead of text files? Or a quick way to put a filter in front of my controller that will strip the extra headers? Alternately, is there a better tool to load-test a Rails application that includes file upload?

  • Porting a select loop application to Android with NDK. Design question.

    - by plaisthos
    Hi, I have a network application which uses a select loop like this:

        bool shutdown = false;
        while (!shutdown) {
            [do something]
            select(..., timeout);
        }

    The main loop cannot work like this in an Android application anymore, since the application needs to receive Intents, handle the GUI, etc. I think I have basically three possibilities:

    1. Move the main loop to the Java part of the application.
    2. Let the loop run in its own thread and somehow communicate from/to Java.
    3. Screw Android <= 2.3, use a native activity and use AInputQueue/ALooper instead of select.

    The first possibility is not easy since Java has no select which works on fds. Simply using the select and returning to Java after each loop is not an elegant possibility either, since that requires setting the timeout to something like 20ms to have a good response time in the Java part of the program. The second possibility sounds nicer, but I have to do some communication between the Java and the C/C++ parts of the program. Things that could work: using a socket (kind of ugly); using native calls in the "java gui thread" and callbacks from native in the "c thread". Both threads need to have thread-safe implementations, but this is manageable. I have not explored the third possibility, but I think that it is not the way to go. I think I can hack something together which will work, but I'm asking what the best path to choose is.

  • Cookies not present after using XMLHttpRequest

    - by Joe B
    I'm trying to make a bookmarklet to download videos off of YouTube, but I've come across a little problem. To detect the highest quality video available, I use a sort of brute force method, in which I make requests using the XMLHttpRequest object until a 404 isn't returned (I can't do it until a 200 OK is returned because YouTube redirects to a different server if the video is available, and the cross-domain policy won't allow me to access any of that data). Once a working URL is found, I simply set window.location to the URL and the download should start, right? Wrong. A request is made, but for reasons unknown to me, the cookies are stripped and YouTube returns a 403 access denied. This does not happen if the XML requests aren't made before it, i.e. if I just set the window.location to the URL everything works fine; it's when I do the XMLHttpRequest that the cookies aren't sent. It's hard to explain so here's the script:

        var formats = ["37", "22", "35", "34", "18", ""];
        var url = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] + "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t'])) + "&fmt=";
        for (var i = 0; i < formats.length; i++) {
            xmlhttp = new XMLHttpRequest;
            xmlhttp.open("HEAD", url + formats[i], false);
            xmlhttp.send(null);
            if (xmlhttp.status != 404) {
                document.location = url + formats[i];
                break;
            }
        }

    That script does not send the cookies after setting the document.location and thus does not work. However, simply doing this:

        document.location = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] + "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t']))

    DOES send the cookies along with the request, and does work. The only downside is I can't automatically detect the highest quality; I just have to try every "fmt" parameter manually until I get it right. So my question is: why is the XMLHttpRequest object removing cookies from subsequent requests? This is the first time I've ever done anything in JS by the way, so please, go easy on me. ;)

  • Progress Bar design patterns?

    - by shoosh
    The application I'm writing performs a lengthy algorithm which usually takes a few minutes to finish. During this time I'd like to show the user a progress bar which indicates how much of the algorithm is done as precisely as possible. The algorithm is divided into several steps, each with its own typical timing. For instance:

        initialization (500 milli-sec)
        reading inputs (5 sec)
        step 1 (30 sec)
        step 2 (3 minutes)
        writing outputs (7 sec)
        shutting down (10 milli-sec)

    Each step can report its progress quite easily by setting the range it's working on, say [0 to 150], and then reporting the value it completed in its main loop. What I currently have set up is a scheme of nested progress monitors which form a sort of implicit tree of progress reporting. All progress monitors inherit from an interface IProgressMonitor:

        class IProgressMonitor
        {
        public:
          virtual void setRange(int from, int to) = 0;
          virtual void setValue(int v) = 0;
        };

    The root of the tree is the ProgressMonitor which is connected to the actual GUI interface:

        class GUIBarProgressMonitor : public IProgressMonitor
        {
          GUIBarProgressMonitor(ProgressBarWidget *);
        };

    Any other node in the tree is a monitor which takes control of a piece of the parent's progress:

        class SubProgressMonitor : public IProgressMonitor
        {
          SubProgressMonitor(IProgressMonitor *parent, int parentFrom, int parentLength)
          ...
        };

    A SubProgressMonitor takes control of the range [parentFrom, parentFrom+parentLength] of its parent. With this scheme I am able to statically divide the top level progress according to the expected relative portion of each step in the global timing. Each step can then be further subdivided into pieces, etc. The main disadvantage of this is that the division is static, and it gets painful to make changes according to variables which are discovered at run time. So the question: are there any known design patterns for progress monitoring which solve this issue?
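
    One pattern that fits the "dynamic division" requirement is a composite progress node whose children carry weights that can be revised while the job runs; the parent simply renormalizes. The sketch below is a minimal illustration in C# rather than the poster's C++ (all class and member names are made up for the example):

        // Composite progress: each child has a weight; weights may be changed at run time
        // and the parent recomputes its aggregated fraction from the children's fractions.
        using System;
        using System.Collections.Generic;

        class ProgressNode
        {
            private class Entry { public ProgressNode Child; public double Weight; }
            private readonly List<Entry> children = new List<Entry>();
            private double ownFraction;               // 0..1, meaningful only for leaf steps
            public event Action<double> Changed;      // aggregated 0..1 fraction, wired to the GUI at the root

            public ProgressNode AddChild(double weight)
            {
                var e = new Entry { Child = new ProgressNode(), Weight = weight };
                e.Child.Changed += delegate { Publish(); };
                children.Add(e);
                return e.Child;
            }

            // Adjust a child's share once run-time information (e.g. actual input size) is known.
            public void ReWeight(ProgressNode child, double weight)
            {
                foreach (var e in children)
                    if (e.Child == child) e.Weight = weight;
                Publish();
            }

            public void Report(double fraction)       // leaf steps call this from their main loop
            {
                ownFraction = fraction;
                Publish();
            }

            public double Fraction
            {
                get
                {
                    if (children.Count == 0) return ownFraction;
                    double total = 0, done = 0;
                    foreach (var e in children) { total += e.Weight; done += e.Weight * e.Child.Fraction; }
                    return total > 0 ? done / total : 0;
                }
            }

            private void Publish()
            {
                var handler = Changed;
                if (handler != null) handler(Fraction);
            }
        }

    The same idea maps directly onto the existing SubProgressMonitor scheme: instead of fixed [parentFrom, parentLength] ranges, each child holds a weight that a step may update as soon as it learns its real workload.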

  • WPF Exposing a calculated property for binding (as DependencyProperty)

    - by kubal5003
    Hello, I have a complex WPF control that for some reasons (i.e. performance) is not using dependency properties but simple C# properties (at least at the top level these are exposed as properties). The goal is to make it possible to bind to some of those top level properties - I guess I should declare them as DPs. (Right? Or is there some other way to achieve this?) I started reading on MSDN about DependencyProperties and DependencyObjects and found an example:

        public class MyStateControl : ButtonBase
        {
          public MyStateControl() : base() { }

          public Boolean State
          {
            get { return (Boolean)this.GetValue(StateProperty); }
            set { this.SetValue(StateProperty, value); }
          }

          public static readonly DependencyProperty StateProperty = DependencyProperty.Register(
            "State", typeof(Boolean), typeof(MyStateControl), new PropertyMetadata(false));
        }

    If I'm right, this code forces the property to be backed by a DependencyProperty, which restricts it to being a simple property with a store (from a functional point of view, not technically), instead of being able to calculate the property value each time the getter is called and set other properties/fields each time the setter is called. What can I do about that? Is there any way I could make those two worlds meet at some point?
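
    One common way to expose a calculated value for binding, sketched here as an assumption rather than the control's actual code (the Total/RecalculateTotal names are illustrative), is a read-only dependency property: the control recomputes it whenever its cheap C# state changes, and XAML binds to it one-way:

        // A calculated, read-only dependency property: only the control can write it,
        // bindings read it, and it is refreshed whenever the underlying state changes.
        using System.Windows;
        using System.Windows.Controls.Primitives;

        public class MyStateControl : ButtonBase
        {
            private static readonly DependencyPropertyKey TotalPropertyKey =
                DependencyProperty.RegisterReadOnly(
                    "Total", typeof(double), typeof(MyStateControl), new PropertyMetadata(0.0));

            public static readonly DependencyProperty TotalProperty = TotalPropertyKey.DependencyProperty;

            public double Total
            {
                get { return (double)GetValue(TotalProperty); }
                private set { SetValue(TotalPropertyKey, value); }   // writes require the key
            }

            // Call this from wherever the plain C# properties are updated today.
            private void RecalculateTotal()
            {
                Total = ComputeFromInternalState();
            }

            private double ComputeFromInternalState()
            {
                return 0.0; // placeholder for the real calculation
            }
        }

    This keeps the hot path on ordinary fields/properties and only pays the dependency-property cost when the calculated value actually changes.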

  • Issues with cross-domain uploading

    - by meder
    I'm using a django plugin called django-filebrowser which utilizes uploadify. The issue I'm having is that I'm hosting uploadify.swf on a remote static media server, whereas my admin area is on my django server. At first, the browse button wouldn't invoke my browser's upload. I fixed this by modifying the scriptAccess setting to always instead of sameDomain. Now the progress bar doesn't move at all; I probably have to enable some server setting for cross domain file uploading, or most likely actually host a separate upload script on my media server. I thought I could solve this by adding a crossdomain.xml to enable any site at the root of both servers, but that doesn't seem to solve it.

        $(document).ready(function() {
            $('#id_file').uploadify({
                'uploader'       : 'http://media.site.com:8080/admin/filebrowser/uploadify/uploadify.swf',
                'script'         : '/admin/filebrowser/upload_file/',
                'scriptData'     : {'session_key': '...'},
                'checkScript'    : '/admin/filebrowser/check_file/',
                'cancelImg'      : 'http://media.site.com:8080/admin/filebrowser/uploadify/cancel.png',
                'auto'           : false,
                'folder'         : '',
                'multi'          : true,
                'fileDesc'       : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'fileExt'        : '*.html;*.py;*.js;*.css;*.jpg;*.jpeg;*.gif;*.png;*.tif;*.tiff;*.mp3;*.mp4;*.wav;*.aiff;*.midi;*.m4p;*.mov;*.wmv;*.mpeg;*.mpg;*.avi;*.rm;*.pdf;*.doc;*.rtf;*.txt;*.xls;*.csv;',
                'sizeLimit'      : 10485760,
                'scriptAccess'   : 'always',
                //'scriptAccess' : 'sameDomain',
                'queueSizeLimit' : 50,
                'simUploadLimit' : 1,
                'width'          : 300,
                'height'         : 30,
                'hideButton'     : false,
                'wmode'          : 'transparent',
                translations     : {
                    browseButton: 'BROWSE',
                    error: 'An Error occured',
                    completed: 'Completed',
                    replaceFile: 'Do you want to replace the file',
                    unitKb: 'KB',
                    unitMb: 'MB'
                }
            });
            $('input:submit').click(function(){
                $('#id_file').uploadifyUpload();
                return false;
            });
        });

    The page I'm viewing this on is http://site.com/admin/filebrowser/browse on port 80.

  • c# ListView unable to check a checkbox! Strange problem.

    - by Kurt
    This is a strange problem, and I've not added much code as I don't know where to start. I have a ListView control in virtual mode. If I filter the listview to show me all people called John, I see 3 users called John. I then cancel the filter, setting all values to null, and return all data to the listview; I now have several hundred items in the list, but I can only see 30 on screen unless I scroll down the listview. I then use the code below to check a checkbox in each row. All get checked apart from the 3 Johns - but if I can see one of the 3 Johns in the listview without scrolling and then run the code below, the visible John is checked.

        for (int i = 0; i < this._items.Count; i++)
        {
            this._items[i].Checked = true;
        }

    I have checked the status of the checkbox just before it is checked in the above code: if John is visible then the checkbox believes it is unchecked (false), and if it is not visible it believes it is checked (true). So, with one visible John on screen the checkbox looks unchecked and running a test proves it is unchecked; the two Johns I can't see believe they are checked, but if I scroll down so I can see them they aren't. Any ideas? Thanks
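
    Not enough of the original code is shown to be sure, but in virtual mode the ListView recreates items on demand, so check state set on a cached item list is easily lost when rows are re-fetched. A hedged sketch of one approach is to keep the check state in your own structure and apply it in RetrieveVirtualItem (the names, PeopleForm and checkedIndices, are made up):

        // Virtual-mode sketch: the application owns the check state; the ListView only displays it.
        using System.Collections.Generic;
        using System.Windows.Forms;

        public partial class PeopleForm : Form
        {
            private readonly List<string> names = new List<string>();      // the real data source
            private readonly HashSet<int> checkedIndices = new HashSet<int>();

            private void listView1_RetrieveVirtualItem(object sender, RetrieveVirtualItemEventArgs e)
            {
                e.Item = new ListViewItem(names[e.ItemIndex])
                {
                    Checked = checkedIndices.Contains(e.ItemIndex)
                };
            }

            private void CheckAll()
            {
                checkedIndices.Clear();
                for (int i = 0; i < names.Count; i++) checkedIndices.Add(i);
                listView1.Invalidate();   // visible rows are re-fetched with the new state
            }
        }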

  • JBoss 5.1 binds to host address while run in vserver with -b <guest address>

    - by bart cane
    Hello, while starting JBoss 5.1.0.GA in a virtual server machine on Debian (linux-VServer technology) I get the following error:

        ERROR [org.jboss.kernel.plugins.dependency.AbstractKernelController] (main) Error installing to Start:
        name=jboss.remoting:protocol=rmi,service=JMXConnectorServer state=Create mode=Manual requiredState=Installed
        java.io.IOException: Cannot bind to URL [rmi://10.1.2.11:1090/jmxconnector]: javax.naming.NoPermissionException
        [Root exception is java.rmi.ServerException: RemoteException occurred in server thread; nested exception is:
        java.rmi.AccessException: Registry.Registry.bind disallowed; origin /AA.BB.CC.DD is non-local host]

    where AA.BB.CC.DD is the host machine's name, 10.1.2.11 is the vserver guest with JBoss, and JBoss is started with -b 10.1.2.11 (I also tried -Djboss.bind.address=10.1.2.11 - same result). 10.1.2.11 is bound to the dummy2 interface on the host (serving the 10.1.2.1 network). The root exception is strange - why does JBoss want to bind to the host address AA.BB.CC.DD? There were no problems with 4.2.3.GA on the same machine, also started with -b 10.1.2.11. It starts correctly when no params are present - it binds to localhost and everything is ok, but it MUST be bound to 10.1.2.11 to be visible to Apache on another vserver guest, acting as a proxy. I thought it could be fixed by setting net.ipv4.conf.all.promote_secondaries=1 via sysctl (was 0), but it didn't help much. Has anyone had such a problem? Regards, bart

  • Android: writing a file to sdcard

    - by Sumit M Asok
    I'm trying to write a file from an Http post reply to a file on the sdcard. Everything works fine until the byte array of data is retrieved. I've tried setting the WRITE_EXTERNAL_STORAGE permission in the manifest and tried many different combinations of tutorials I found on the net. All I could find was using the openFileOutput("", MODE_WORLD_READABLE) method of the activity, but my app writes files from a thread. Specifically, a thread is invoked from another thread when a file has to be written, so passing an activity object didn't work even though I tried it. The app has come a long way and I cannot change how the app is currently written. Please, someone help me? CODE:

        File file = new File(bgdmanip.savLocation);
        FileOutputStream filecon = null;
        filecon = new FileOutputStream(file);
        // bgdmanip.savLocation holds the whole file's path
        byte[] myByte;
        myByte = Base64Coder.decode(seReply);
        Log.d("myBytes", String.valueOf(myByte));
        bos.write(myByte);
        filecon.write(myByte);
        myvals = x * 11024;

    seReply is a string reply from the HttpPost response. The second set of code is looped with reference to x. The file is created but remains 0 bytes.

  • Problem with Tapestry palette's arrow icons in IE8

    - by JellyHead
    I'm using Tapestry to create pages for a web app, and have been using the palette component to add/delete items to/from a group. The page looks great in Firefox (Tapestry seems biased towards Firefox), but my customers will all be using Internet Explorer (any version from 6, 7 & 8), and in IE8 the disabled arrow buttons look awful. In Firefox they are faded, using an opacity setting of 25%, but this doesn't work in IE8 and instead you get a faded image with an ugly black border around the image. In tapestry-core's stylesheet (default.css), you have the following for a disabled arrow button:

        DIV.t-palette-controls BUTTON[disabled] IMG {
          filter: alpha(opacity = 25);
          -moz-opacity: .25;
        }

    These are clearly out of date, as -moz-opacity is no longer supported by Firefox (use opacity: .25 instead). The problem is with filter: "alpha(opacity = 25);". If I remove this, the arrows look fine in IE8, but they are not faded. I got the magic instruction:

        -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(opacity=25)";

    from various websites, but putting this in does not work either - the arrow icons are ugly again. The icon itself (distributed with Tapestry) just seems to be a regular PNG, but I'm not an expert on image formats, so maybe there's a problem there? Anyone else had this problem?

  • UniqueConstraint in EmbeddedConfiguration

    - by LantisGaius
    I just started using db4o on C#, and I'm having trouble setting the UniqueConstraint on the DB. Here's the db4o configuration:

        static IObjectContainer db = Db4oEmbedded.OpenFile(dbase.Configuration(), "data.db4o");

        static IEmbeddedConfiguration Configuration()
        {
            IEmbeddedConfiguration dbConfig = Db4oEmbedded.NewConfiguration();
            // Initialize Replication
            dbConfig.File.GenerateUUIDs = ConfigScope.Globally;
            dbConfig.File.GenerateVersionNumbers = ConfigScope.Globally;
            // Initialize Indexes
            dbConfig.Common.ObjectClass(typeof(DAObs.Environment)).ObjectField("Key").Indexed(true);
            dbConfig.Common.Add(new Db4objects.Db4o.Constraints.UniqueFieldValueConstraint(typeof(DAObs.Environment), "Key"));
            return dbConfig;
        }

    and the object to serialize:

        class Environment
        {
            public string Key { get; set; }
            public string Value { get; set; }
        }

    Every time I get to committing some values, an "Object reference not set to an instance of an object." exception pops up, with a stack trace pointing to the UniqueFieldValueConstraint. Also, when I comment out the two lines after the "Initialize Indexes" comment, everything runs fine (except that you can then save non-unique keys, which is a problem)~ Commit code (in case I'm doing something wrong in this part too):

        public static void Create(string key, string value)
        {
            try
            {
                db.Store(new DAObs.Environment() { Key = key, Value = value });
                db.Commit();
            }
            catch (Db4objects.Db4o.Events.EventException ex)
            {
                System.Console.WriteLine(DateTime.Now + " :: Environment.Create\n"
                    + ex.InnerException.Message + "\n" + ex.InnerException.StackTrace);
                db.Rollback();
            }
        }

    Help please? Thanks in advance~
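
    One thing worth checking - and this is only a guess, not something confirmed by the question - is that with C# auto-implemented properties the field db4o actually stores is the compiler-generated backing field, not a field literally named "Key", so the index and the constraint may be pointing at a field that doesn't exist. A hedged sketch of the configuration targeting the backing-field name, using only the calls already shown above:

        // Assumption: db4o stores the auto-property's backing field ("<Key>k__BackingField"),
        // so both the index and the unique constraint are registered against that name.
        const string keyField = "<Key>k__BackingField";
        dbConfig.Common.ObjectClass(typeof(DAObs.Environment)).ObjectField(keyField).Indexed(true);
        dbConfig.Common.Add(new Db4objects.Db4o.Constraints.UniqueFieldValueConstraint(
            typeof(DAObs.Environment), keyField));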

  • URL Encoding using C#

    - by masfenix
    I have an application which I've developed for a friend. It sends a POST request to the VB forum software and logs someone in (without setting cookies or anything). Once the user is logged in I create a variable that creates a path on their local machine:

        c:\tempfolder\date\username

    The problem is that some usernames throw an "Illegal chars" exception. For example, if my username was mas|fenix it would throw an exception:

        Path.Combine( _
            Environment.GetFolderPath(System.Environment.SpecialFolder.CommonApplicationData), _
            DateTime.Now.ToString("ddMMyyhhmm") + "-" + form1.username)

    I don't want to remove the character from the string, but a folder with their username is created through FTP on a server. And this leads to my second question: if I am creating a folder on the server, can I leave the "illegal chars" in? I only ask this because the server is Linux based, and I am not sure if Linux accepts it or not. EDIT: It seems that URL encoding is NOT what I want. Here's what I want to do:

        old username = mas|fenix
        new username = mas%xxfenix

    Where %xx is the ASCII value, or any other value that would easily identify the character.
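
    A hedged sketch of the mas%xxfenix idea in C# (the class and method names are made up; %xx here is the character code in hexadecimal, which is close to, but not exactly, a decimal ASCII value):

        // Replaces every character that is illegal in a Windows file/folder name with %xx (hex char code),
        // so "mas|fenix" becomes "mas%7Cfenix".
        using System.IO;
        using System.Linq;
        using System.Text;

        static class FolderNames
        {
            public static string Escape(string userName)
            {
                char[] invalid = Path.GetInvalidFileNameChars();
                var sb = new StringBuilder();
                foreach (char c in userName)
                {
                    if (invalid.Contains(c))
                        sb.AppendFormat("%{0:X2}", (int)c);
                    else
                        sb.Append(c);
                }
                return sb.ToString();
            }
        }

    On the Linux side, almost any byte except '/' and NUL is legal in a directory name, so the pipe could technically stay; escaping it anyway keeps the local path and the FTP-created folder consistent.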

  • How Can I Prevent Memory Leaks in IE Mobile?

    - by Jake Howlett
    Hi All, I've written an application for use offline (with Google Gears) on devices using IE Mobile. The devices are experiencing memory leaks at such a rate that the device becomes unusable over time. The problem page fetches entries from the local Gears database and renders a table of each entry with a link in the last column of each row to open the entry (the link is just onclick="open('myID')"). When they're done with the entry they return to the table, which is RE-rendered. It's the repeated building of this table that appears to be the problem - mainly the onclick events. The table is generated in essence like this:

        var tmp = "";
        for (var i = 0; i < 100; i++) {
            tmp += "<tr><td>row " + i + "</td><td><a href=\"#\" id=\"LINK-" + i + "\"" +
                   " onclick=\"afunction();return false;\">link</a></td></tr>";
        }
        document.getElementById('view').innerHTML = "<table>" + tmp + "</table>";

    I've read up on common causes of memory leaks and tried setting the onclick event for each link to "null" before re-rendering the table, but it still seems to leak. Anybody got any ideas? In case it matters, the function being called from each link looks like this:

        function afunction(){
            document.getElementById('view').style.display = "none";
        }

    Would that constitute a circular reference in any way? Jake

  • Converting from CVS to SVN to Hg, does 'hg convert' require pointing to an SVN checkout or just a re

    - by Terry
    As part of migrating full CVS history to Hg, I've used cvs2svn to create an SVN repo in a local directory. Its first-level directory structure is:

        2010-04-21  09:39 AM    <DIR>          .
        2010-04-21  09:39 AM    <DIR>          ..
        2010-04-21  09:39 AM    <DIR>          locks
        2010-04-21  09:39 AM    <DIR>          hooks
        2010-04-21  09:39 AM    <DIR>          conf
        2010-04-21  09:39 AM               229 README.txt
        2010-04-21  11:45 AM    <DIR>          db
        2010-04-21  09:39 AM                 2 format
                       2 File(s)            231 bytes

    After setting up hg and the convert extension and attempting the convert, I get the following on convert:

        C:\>hg convert file://localhost/Users/terry/Desktop/repoSVN
        assuming destination repoSVN-hg
        initializing destination repoSVN-hg repository
        file://localhost/Users/terry/Desktop/repoSVN does not look like a CVS checkout
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Git repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Subversion repo
        file://localhost/Users/terry/Desktop/repoSVN is not a local Mercurial repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a darcs repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a monotone repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a GNU Arch repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Bazaar repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a P4 repo
        abort: file://localhost/Users/terry/Desktop/repoSVN: missing or unsupported repository

    I have TortoiseHg installed. For info, hg version reports:

        Mercurial Distributed SCM (version 1.4.3)

    This version of Mercurial seems to have some svn bindings, if library.zip in the install is to be believed. Do I need to do a checkout and point hg convert to it for this to work properly?

  • Can't get any speedup from parallelizing Quicksort using Pthreads

    - by Murat Ayfer
    I'm using Pthreads to create a new thread for each partition after the list is split into the right and left halves (less than and greater than the pivot). I do this recursively until I reach the maximum number of allowed threads. When I use printfs to follow what goes on in the program, I clearly see that each thread is doing its delegated work in parallel. However, using a single process is always the fastest. As soon as I try to use more threads, the time it takes to finish almost doubles, and keeps increasing with the number of threads. I am allowed to use up to 16 processors on the server I am running it on. The algorithm goes like this: split the array into right and left by comparing the elements to the pivot; start a new thread for the right and the left, and wait until the threads join back; if there are more available threads, they can create more recursively; each thread waits for its children to join. Everything makes sense to me, and sorting works perfectly well, but more threads make it slow down immensely. I tried setting a minimum number of elements per partition for a thread to be started (e.g. 50000). I tried an approach where, when a thread is done, it allows another thread to be started, which leads to hundreds of threads starting and finishing throughout; I think the overhead was way too much. So I got rid of that, and if a thread was done executing, no new thread was created. I got a little more speedup but it was still a lot slower than a single process. The code I used is below. http://pastebin.com/UaGsjcq2 Does anybody have any clue as to what I could be doing wrong?

  • How to access the remote OPC server programmatically?

    - by Shailesh Jaiswal
    I have downloaded & installed the OPCDA.NET client component evaluation & the XMLDA.NET client component evaluation. They provide some C# samples for browsing the available OPC servers, connecting to the OPC server, & browsing the available items on the server. I know the programmatic way in which we can access the local OPC server; it is provided in the sample C# applications. I have installed the OPC server on another machine (a remote machine). I have done all the required settings related to the 'dcomcnfg' utility. I can access the remote OPC server from the client machine by using the Test Client provided by the OPCDA.NET and XMLDA.NET client component evaluations. But I am unaware of how this can be done programmatically. In the available C# samples I found no such programmatic part (coding) in which we can access the remote OPC server. Can you provide the code through which I can browse the available remote machines in my network, browse the available OPC servers on a machine after selecting the specific machine name, connect to the OPC server & browse the available items on the server? Or can you provide any link through which I can resolve the above issue?

  • How do I encapsulate form/post/validation[/redirect] in ViewUserControl in ASP.Net MVC 2

    - by paul
    What I am trying to achieve: encapsulate a Login (or any) Form to be reused across the site; post to self when Login/validation fails and show the original page with a Validation Summary (some might argue to just post to the Login Page and show the Validation Summary there; if what I'm trying to achieve isn't possible, I will just go that route); when Login succeeds, redirect to /App/Home/Index. Also, I want to stick to PRG principles, avoid ajax, and keep the Login Form (UserController.Login()) as encapsulated as possible; avoid having to implement HomeController.Login() since the Login Form might appear elsewhere. All but the redirect works. My approach thus far has been: Home/Index includes the Login Form:

        <% Html.RenderAction("Login", "User"); %>

    User/Login is a ViewUserControl<UserLoginViewModel> and includes:

        <%= Html.ValidationSummary("") %>

    plus using(Html.BeginForm()){ } and a hidden form field "userlogin" = "1".

        public class UserController : BaseController
        {
            ...
            [AcceptPostWhenFieldExists(FieldName = "userlogin")]
            public ActionResult Login(UserLoginViewModel model, FormCollection form)
            {
                if (ModelState.IsValid)
                {
                    if (checkUserCredentials())
                    {
                        setUserCredentials();
                        return this.RedirectToAction<Areas.App.Controllers.HomeController>(x => x.Index());
                    }
                    else
                    {
                        return View();
                    }
                }
                ...
            }
        }

    Works great when ModelState or User Credentials fail -- return View() does yield to Home/Index and displays the appropriate validation summary. (I have a Register Form on the same page, using the same structure. Each form's validation summary only shows when that form is submitted.) Fails when ModelState and User Credentials are valid -- RedirectToAction<>() gives the following error: "Child actions are not allowed to perform redirect actions." It seems like in the Classic ASP days, this would've been solved with Response.Buffer=True. Is there an equivalent setting or workaround now? Btw, running: ASP.Net 4, MVC 2, VS 2010, Dev/Debugging Web Server. I hope all of that makes sense. So, what are my options? Or where am I going wrong in my approach? tia!
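
    One possible workaround - an assumption on my part, not something from the original post - is to let the child action only render the form, and point the form at a full, top-level POST action where redirects are legal (Html.BeginForm("Login", "User") in the partial). Attribute usage and names below are illustrative; checkUserCredentials/setUserCredentials are the helpers already shown above:

        // Added to the existing UserController.

        // GET: rendered as a child action inside Home/Index; no redirect needed here.
        [ChildActionOnly]
        public ActionResult Login()
        {
            return PartialView(new UserLoginViewModel());
        }

        // POST: the form targets this action directly, so it executes as a normal request
        // and RedirectToAction is allowed.
        [HttpPost]
        public ActionResult Login(UserLoginViewModel model)
        {
            if (ModelState.IsValid && checkUserCredentials())
            {
                setUserCredentials();
                return RedirectToAction("Index", "Home", new { area = "App" });
            }
            // On failure, either return a standalone login view, or redirect back to the
            // originating page with the errors stashed in TempData.
            return View(model);
        }

    The trade-off is that a failed post no longer lands back on Home/Index automatically, which is why the failure branch has to redirect back (or render its own view) explicitly.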

  • Visual Studio project using the Quality Center API remains in memory

    - by Traveling Tech Guy
    Hi, I'm currently developing a connector DLL to HP's Quality Center. I'm using their (insert expletive) COM API to connect to the server. An Interop wrapper gets created automatically by VStudio. My solution has 2 projects: the DLL and a tester application - essentially a form with buttons that call functions in the DLL. Everything works well - I can create defects, update them and delete them. When I close the main form, the application stops nicely. But when I call a function that returns a list of all available projects (to fill a combo box), if I close the main form, VStudio still shows the solution as running and I have to stop it. I've managed to pinpoint a single function in my code: when I call it, the solution remains "hung"; when I don't, it closes well. It's a call to a property in the TDC object, get_VisibleProjects, that returns a List (not the .Net one, but a type in the COM library) - I just iterate over it and return a proper list (that I later use to fill the combo box):

        public List<string> GetAvailableProjects()
        {
            List<string> projects = new List<string>();
            foreach (string project in this.tdc.get_VisibleProjects(qcDomain))
            {
                projects.Add(project);
            }
            return projects;
        }

    My assumption is that something gets retained in memory. If I run the EXE outside of VStudio it closes - but who knows what gets left behind in memory? My question is: how do I get rid of whatever calling this property returns? Shouldn't the GC handle this? Do I need to delve into pointers? Things I've tried: getting the list into a variable and setting it to null at the end of the function; adding a destructor to the class and nulling the tdc object; stepping through the tester application all the way out - when the form closes and the Main function ends, it closes, but VStudio still shows I'm running. Thanks for your assistance!
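
    One thing worth trying - a hedged sketch rather than a confirmed fix - is to release the COM wrapper for the returned collection explicitly, since a lingering RCW can keep the out-of-process COM server (and with it the debug session) alive until the finalizer thread gets to it:

        // Same method as above, but the COM collection is released deterministically.
        // Assumes the interop "List" type is enumerable exactly as in the original snippet.
        public List<string> GetAvailableProjects()
        {
            List<string> projects = new List<string>();
            object comList = this.tdc.get_VisibleProjects(qcDomain);
            try
            {
                foreach (string project in (System.Collections.IEnumerable)comList)
                {
                    projects.Add(project);
                }
            }
            finally
            {
                System.Runtime.InteropServices.Marshal.ReleaseComObject(comList);
            }
            return projects;
        }

    If that alone isn't enough, a GC.Collect() followed by GC.WaitForPendingFinalizers() on shutdown is the usual blunt instrument for flushing any remaining interop wrappers.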

  • Opening href in jQuery Dialog

    - by Phil
    Okay, so I've got the following code to create a dialog of a div within a page:

        $('#modal').dialog({
            autoOpen: false,
            width: 600,
            height: 450,
            modal: true,
            resizable: false,
            draggable: false,
            title: 'Enter Data',
            close: function() {
                $("#modal .entry_date").datepicker('hide');
            }
        });

        $('.modal').click(function() {
            $('#modal').dialog('open');
        });

    All working fine. But what I want is to also be able to open a link in a dialog window, kinda like...

        <a href="/path/to/file.html" class="modal">Open Me!!</a>

    I've done this before by hardcoding the path:

        $('#modal').load('/path/to/file.html').dialog('open');

    but we can't hardcode the path in the javascript (as there will be multiple coming from the database) and I'm struggling to understand how to get this to work. I'm also pretty sure that the answer is really obvious, and I'm merely setting myself up to be humbled by the clever folk here at StackOverflow, but I've scratched my head for long enough this afternoon, so my ego has been put away, and hopefully someone can point me in the right direction... Thanks Phil

  • ClickOnce permissions

    - by stephenfalken
    We recently updated our main website. This included creating a new directory to hold the new site; then, some of the existing subdirectories needed to be copied over. Some of the virtual directories below the main site are clickonce publishing locations. These have been 100% successful publishing locations for 3 years now. We would update the application in Visual Studio and then publish... painless. Very long story short, since we've copied the directories to the new main site location on disk, all of our clickonce sites except one will no longer publish. They all fail with an error saying "you are not authorized to perform the current operation". This is immediately after we set permissions to Full Control for my domain user group. I've checked everything I know how to check as far as permissions go and made the non-working ones' permissions match the one that does work, but no joy. We had problems on Friday and I fixed the one site that does work, but I can't remember how I fixed it; all I remember is that it took a long time screwing with it to make it work. Could there be some arcane setting in IIS that has been omitted? Is there a simple list of things to check anywhere on the Net? ClickOnce information is scattered among 50,000 URLs and I haven't been able to figure it out again. Thanks

  • Silverlight DataForm Memory Leak

    - by Andrew Garrison
    Some Background: I have noticed that setting the EditTemplate of a DataForm (from the Silverlight Toolkit) can cause the DataForm not to be garbage collected. Consequently, the parent control of the DataForm cannot be garbage collected either, causing a very significant memory leak. Here's some XAML which demonstrates the case:

        <toolkit:DataForm HorizontalAlignment="Stretch" Margin="10" VerticalAlignment="Stretch">
            <toolkit:DataForm.EditTemplate>
                <DataTemplate>
                    <toolkit:DataField Label="Dummy Binding:">
                        <TextBox Text="{Binding DummyBinding, Mode=TwoWay}" />
                    </toolkit:DataField>
                </DataTemplate>
            </toolkit:DataForm.EditTemplate>
        </toolkit:DataForm>

    I have opened an issue on CodePlex. The issue has an attachment with a project which demonstrates the case. So, My Question Is: Has anyone else encountered this issue? More importantly, does anyone know of any workarounds? How can I force this DataForm to be garbage collected?
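
    For what it's worth, one mitigation sometimes tried in cases like this (an assumption, not a verified fix for this specific Toolkit issue, and it assumes the DataForm above is given x:Name="dataForm") is to detach the template and data when the hosting page unloads, so the DataForm stops referencing them even if the form itself is still rooted:

        // Hypothetical cleanup on the page hosting the DataForm.
        using System.Windows.Controls;

        public partial class MyPage : UserControl
        {
            public MyPage()
            {
                InitializeComponent();
                Unloaded += (s, e) =>
                {
                    // Drop references the DataForm holds onto, so they can be collected
                    // even if the DataForm instance itself leaks.
                    dataForm.CurrentItem = null;
                    dataForm.ItemsSource = null;
                    dataForm.EditTemplate = null;
                };
            }
        }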
