Search Results

Search found 17847 results on 714 pages for 'virtual disk'.


  • Ant JUnit tests are running much slower via Ant than via the IDE - what to look at?

    - by Alex B
    I am running my JUnit tests via Ant and they are running substantially slower than via the IDE. My Ant call is:

        <junit fork="yes" forkmode="once" printsummary="off">
          <classpath refid="test.classpath"/>
          <formatter type="brief" usefile="false"/>
          <batchtest todir="${test.results.dir}/xml">
            <formatter type="xml"/>
            <fileset dir="src" includes="**/*Test.java" />
          </batchtest>
        </junit>

    The same test that runs nearly instantaneously in my IDE (0.067s) takes 4.632s when run through Ant. In the past, I've been able to speed up test problems like this by using the junit fork parameter, but that doesn't seem to be helping in this case. What properties or parameters can I look at to speed up these tests?

    More info: I am comparing the time reported by the IDE with the time that the junit task outputs, not the sum total time reported at the end of the Ant run.

    Update: bizarrely, this problem has resolved itself. What could have caused it? The system runs on a local disk, so that is not the problem.

    Read the article

  • How to limit Access-Control-Allow-Origin to a specific path?

    - by coderama
    I have a website that serves ads via JavaScript, so I basically allow the user to include my script:

        <script src="http://www.example.com/ads.js" ></script>
        <script>
        MYADDS.insertAdvert();
        </script>

    The problem is, I kept getting "No Access-Control-Allow-Origin" errors. That was until I added this to my htaccess file:

        <IfModule mod_headers.c>
        Header set Access-Control-Allow-Origin "*"
        </IfModule>

    Problem is, this opens up my entire site and is probably a security risk. So, seeing as the ads.js file actually only does an AJAX request to http://www.example.com/place/where/my/adds/are/fed/from, how can I make the above htaccess rule only apply to that path? Keep in mind, it's not an actual directory, so I can't put the htaccess file in that folder. It's actually a "virtual path". The site is built using Laravel and therefore does the typical Laravel path rewriting. Here's the full htaccess file:

        <IfModule mod_rewrite.c>
            <IfModule mod_negotiation.c>
                Options -MultiViews
            </IfModule>
            RewriteEngine On

            # Redirect Trailing Slashes...
            RewriteRule ^(.*)/$ /$1 [L,R=301]

            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule ^(.*)$ index.php/$1 [L]
        </IfModule>

    Any ideas how to do this?

    Read the article

  • One-to-many Associations Empty Columns Issue (Ext on Rails)

    - by Joe
    I'm playing with rewriting part of a web application in Rails + Ext. However, I'm having trouble getting an associated model's name to display in the grid view. I've been able to successfully convert several models and arrange the views nicely using tabs and Ext's layout helpers. However, I'm in the middle of setting up an association. I've followed along with Jon Barket's tutorial on how to do this using Ext, and I've made all the Rails and JS changes suggested (with appropriate name changes for my models). The result is that the combo box is now being correctly populated with the names of the associated models, and changes are actually written correctly to the database, BUT the data doesn't show up in the column; it's just empty. However, the correct data is there in the 'detail' view. Really just wondering if anyone else ran into this, or had any thoughts on what could be happening. Definitely willing to post code if requested; just note that (AFAIK) my changes follow the tutorial pretty closely. Thanks in advance!

    UPDATE: Alright, slight progress - kind of. I can get the associated model id # displaying properly just by modifying the column model slightly. But I can't get the virtual attribute displayed in the main table (in Jon's example it's country_name). It still goes blank when I change the data source for that column from dataIndex: 'model[associated_model_id]' to dataIndex: 'virtual_attributes[associated_model_name]'

    ANOTHER UPDATE: Bump. Has NOBODY here tried integrating Ext with Rails?

    Read the article

  • Why does the VS2005 debugger not report "base." values properly? (was "Why is this if statement fail

    - by Rawling
    I'm working on an existing class that is two steps derived from System.Windows.Forms.ComboBox. The class overrides the Text property thus:

        public override string Text
        {
            get { return this.AccessibilityObject.Value; }
            set
            {
                if (base.Text != value)
                {
                    base.Text = value;
                }
            }
        }

    The reason given for that "get" is this MS bug: http://support.microsoft.com/kb/814346

    However, I'm more interested in the fact that the "if" doesn't work. There are times where "base.Text != value" is true and yet pressing F10 steps straight to the closing } of the "set" and the Text property is not changed. I've seen this both by checking values in the debugger and by putting a conditional breakpoint that only breaks when the "if" statement's predicate is true. How on earth can "if" go wrong? The class between this and ComboBox doesn't touch the Text property. The bug above shouldn't really be affecting anything - it says it's fixed in VS2005. Is the debugger showing different values than the program itself sees?

    Update: I think I've found what is happening here. The debugger is reporting the value incorrectly (including evaluating conditional breakpoints incorrectly). To see this, try the following pair of classes:

        class MyBase
        {
            virtual public string Text
            {
                get { return "BaseText"; }
            }
        }

        class MyDerived : MyBase
        {
            public override string Text
            {
                get
                {
                    string test = base.Text;
                    return "DerivedText";
                }
            }
        }

    Put a breakpoint on the last return statement, then run the code and access that property. In my VS2005, hovering over base.Text gives the value "DerivedText", but the variable test has been correctly set to "BaseText". So, new question: why does the debugger not handle base properly, and how can I get it to?
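
    Not part of the original post - a minimal sketch of one way to work around the debugger behaviour described above, assuming the derived ComboBox class from the question (the property name BaseTextForDebugging is hypothetical). A non-virtual pass-through gives the watch window something it evaluates without virtual dispatch:

        public class DebuggableComboBox : System.Windows.Forms.ComboBox
        {
            // Inspection-only helper: shows what base.Text actually returns at
            // runtime, independent of how the debugger renders "base.Text".
            public string BaseTextForDebugging
            {
                get { return base.Text; }
            }

            public override string Text
            {
                get { return this.AccessibilityObject.Value; }
                set
                {
                    // Capture the base value in a local so the comparison and the
                    // debugger both look at the same string instance.
                    string current = base.Text;
                    if (current != value)
                    {
                        base.Text = value;
                    }
                }
            }
        }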

    Read the article

  • What is common between environments within a shell terminal session?

    - by Matt1776
    I have a custom shell script that runs each time a user logs in or an identity is assumed; it's been placed in /etc/profile.d and performs some basic env variable operations. Recently I added some code so that if screen is running it will reattach it without needing me to type anything. There are some problems however. If I log in as root and su - to another user, the code runs a second time. Is there a variable I can set when the code runs the first time that will prevent a second run of the code? I thought about writing something to the disk, but then I don't want to prevent the code from running if I begin a new terminal session.

    Here is the code in question. It first attempts to reattach - if unsuccessful because the session is already attached (as it might be on an interrupted session), it will 'take' the session back.

        screen -r
        if [ -z "$STY" ]; then
            exec screen -dR
        fi

    Ultimately this bug prevents me from substituting user to another user, because as soon as I do so, it grabs the screen session and puts me right back where I started. Pretty frustrating.

    Read the article

  • ref and out parameters in C# and cannot be marked as variant.

    - by Water Cooler v2
    What does the statement mean? From here:

        ref and out parameters in C# and cannot be marked as variant.

    1) Does it mean that the following cannot be done?

        public class SomeClass<R, A> : IVariant<R, A>
        {
            public virtual R DoSomething(ref A args)
            {
                return null;
            }
        }

    2) Or does it mean I cannot have the following?

        public delegate R Reader<out R, in A>(A arg, string s);

        public static void AssignReadFromPeonMethodToDelegate(ref Reader<object, Peon> pReader)
        {
            pReader = ReadFromPeon;
        }

        static object ReadFromPeon(Peon p, string propertyName)
        {
            return p.GetType().GetField(propertyName).GetValue(p);
        }

        static Reader<object, Peon> pReader;

        static void Main(string[] args)
        {
            AssignReadFromPeonMethodToDelegate(ref pReader);
            bCanReadWrite = (bool)pReader(peon, "CanReadWrite");
            Console.WriteLine("Press any key to quit...");
            Console.ReadKey();
        }

    I tried (2) and it worked.
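
    Not part of the original post - a minimal sketch of what the quoted statement appears to refer to (an interpretation, with hypothetical names): a type parameter marked variant (out/in) cannot be used as a ref or out parameter type inside the variant delegate or interface declaration itself. Passing an already-constructed delegate variable by ref, as in example (2), is unrelated to that rule, which is why (2) works.

        namespace VarianceSketch
        {
            // Compiles: R is only returned, A is only accepted by value.
            public delegate R Reader<out R, in A>(A arg, string s);

            // Does not compile: a variant type parameter used as a ref parameter
            // is rejected with an invalid-variance error.
            // public delegate R BadReader<out R, in A>(ref A arg, string s);
        }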

    Read the article

  • [PHP/MySQL] How to create text diff web app

    - by Adam Kiss
    Hello,

    idea: I would like to create a little app for myself to store ideas (the thing is - I want it done MY WAY).

    database: I'm thinking of going simple:

        id      - unique id of revision in database
        text_id - identification number of text
        rev_id  - number of revision
        flags   - various purposes - expl. later
        title   - self expl.
        desc    - description
        text    - self expl.
        flags   - if I (i.e.) add flag rb;65, instead of storing the whole text, I just say that whenever I ask for the latest revision, I go again into the DB and check revision 65

    Question: Is this setup the best? Is it better to store the diff, or the whole text (I know, space is cheap...)? Does that revision flag make sense (wouldn't it be better to just copy the text - more disk space, but less DB and PHP processing)?

    php: I'm thinking that I'll go with PEAR here. Although the main point is open-edit-save, the possibility to view revisions can't be that hard to program and can be a life-saver in certain situations (good ideas got deleted, saving the wrong version, etc...). However, I've never used PEAR in a long-term or full-project relationship, and brief encounters in my previous experience left a rather bad feeling - as I remember, it was too difficult to implement, slow and humongous to play with, so I don't know if there's anything better.

    why? Although there are bazillions of various time/project/idea management tools, everything lacks something for me, whether it's sharing with users, syncing on more PCs, time-tracking, project management... And I believe that this text diff webapp will be for internal use with various different tools later. So if you know any good and nice-UI-having project management app with support for text-heavy usage, just let me know, so I'll save my time for something better than redesigning the wheel.

    Read the article

  • [PHP] Local/Dev/Live deployment - best workflow

    - by Adam Kiss
    Hello,

    situation: We are a little company of 3 people; each has a localhost webserver, and most projects (previous and current) are on one shared network disk. We have a virtual server, where some of our clients' sites and our own site are hosted. Our standard workflow is:

        Coder PC → Programmer localhost → dev domain (client.company.com) → live version (client.com)

    It often happens that there are two or three guys working on the same project at the same time - one is on the dev version, two are on localhost. When finished, we try to synchronize the files on the dev version and ideally not mess up any files, which *knock knock* doesn't happen often. And then one of us deploys the dev version on the live webserver.

    question: We are looking for a way to simplify this workflow while updating websites - ideally some sort of diff uploader or a VCS (Git/SVN/...), but we are not completely sure where to begin or what way would be ideal. Therefore I ask you, fellow stackoverflowers, for your experience with website / application deployment and recommended workflow. We probably will also need to use a Mac in the process, so if that won't be a problem, that would be even better. Thank you

    Read the article

  • Iteration through the HtmlDocument.All collection stops at the referenced stylesheet?

    - by Jonas
    Since "bug in .NET" is often not the real cause of a problem, I wonder if I'm missing something here. What I'm doing feels pretty simple. I'm iterating through the elements in a HtmlDocument called doc like this: System.Diagnostics.Debug.WriteLine("*** " + doc.Url + " ***"); foreach (HtmlElement field in doc.All) System.Diagnostics.Debug.WriteLine(string.Format("Tag = {0}, ID = {1} ", field.TagName, field.Id)); I then discovered the debug window output was this: Tag = !, ID = Tag = HTML, ID = Tag = HEAD, ID = Tag = TITLE, ID = Tag = LINK, ID = ... when the actual HTML document looks like this: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN"> <html> <head> <title>Protocol</title> <link rel="Stylesheet" type="text/css" media="all" href="ProtocolStyle.css"> </head> <body onselectstart="return false"> <table> <!-- Misc. table elements and cell values --> </table> </body> </html> Commenting out the LINK tag solves the issue for me, and the document is completely parsed. The ProtocolStyle.css file exist on disk and is loaded properly, if that would matter. Is this a bug in .NET 3.5 SP1, or what? For being such a web-oriented framework, I find it hard to believe there would be such a major bug in it.

    Read the article

  • Which SQL consumes less memory?

    - by prmatta
    Yesterday I asked a question on how to rewrite SQL to do selects and inserts in batches. I needed to do this to try to consume less virtual memory, since I need to move millions of rows here. The objective is to move rows from Table B into Table A. Here are the ways I can think of doing this:

        SQL #1)
        INSERT INTO A (x, y, z)
        SELECT x, y, z FROM B b WHERE ...

        SQL #2)
        FOREACH
            SELECT x, y, z FROM B b WHERE ...
            INSERT INTO A (x, y, z);
        END FOREACH;

        SQL #3)
        FOREACH
            SELECT FIRST 2000 x, y, z FROM B b WHERE ...
            INSERT INTO A (x, y, z);
        END FOREACH;

        SQL #4)
        FOREACH
            SELECT FIRST 2000 x, y, z FROM B b WHERE ... AND NOT EXISTS IN (SELECT * FROM A)
            INSERT INTO A (x, y, z);
        END FOREACH;

    Are any of the above incorrect? The database is Informix 11.5.

    Read the article

  • Doubt on pointer conversion

    - by Simone
    Suppose we have the following code:

        #include <iostream>

        struct A
        {
            virtual void f() { std::cout << "A::f()" << std::endl; }
        };

        struct B : A
        {
            void f() { std::cout << "B::f()" << std::endl; }
        };

        void to_A(void* voidp)
        {
            A* aptr = static_cast<A*>(voidp);
            aptr->f();
        }

        void to_B(void* voidp)
        {
            B* bptr2 = static_cast<B*>(voidp);
            bptr2->f();
        }

        int main()
        {
            B* bptr = new B;
            void* voidp = bptr;
            to_A(voidp); // prints B::f()
            to_B(voidp); // prints B::f()
        }

    Is this code guaranteed to always work as in the code comments, or is it UB? AFAIK it should be ok, but I'd like to be reassured.

    Read the article

  • Context.Items.Add on an XML return type?

    - by Scott Schluer
    So I have the following code contained within an HttpModule in an application I've been asked to support:

        app.Context.Response.ContentType = "text/xml";
        app.Context.Items.Add("IpixRoomId", ipixRoomId);
        app.Context.Items.Add("IpixId", ipixId);
        app.Context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        app.Context.RewritePath(rewriteUrl, true);

    What's the purpose of adding data to Context.Items when the content type is XML?

    EDIT: For clarification, I'm calling up this URL: http://website.com/virtualtour/1971/6284/panorama2flash.swf

    I assume the SWF file (I know very little about Flash) makes another call to http://website.com/virtualtour/config.xml. The code I pasted above only executes on calls to config.xml. So since it's only the SWF file and config.xml being requested from the server, I'm a little confused. Can the .SWF file have access to HttpContext.Current.Items? Other than the HttpModule, there is no .NET involved in the code; it's a straight request to the SWF file which triggers a call to config.xml, but it seems that those Context.Items contain the data needed to make the SWF file display the right virtual tour. I'm just missing where that link happens. It can't happen in the XML, so maybe in Flash?
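
    Not from the original post - a minimal sketch, with hypothetical type names and illustrative values, of how Context.Items typically bridges an HttpModule and the handler that serves the rewritten request. The items live only for the lifetime of that single HTTP request, so the SWF itself never sees them; only the server-side code that builds config.xml does.

        using System.Web;

        // Hypothetical module: stores per-request values, then rewrites the path.
        public class TourModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    var ctx = ((HttpApplication)sender).Context;
                    if (ctx.Request.Path.EndsWith("config.xml"))
                    {
                        ctx.Items["IpixRoomId"] = 1971;   // illustrative values
                        ctx.Items["IpixId"] = 6284;
                        ctx.RewritePath("~/TourConfig.ashx", true); // hypothetical target
                    }
                };
            }

            public void Dispose() { }
        }

        // Hypothetical handler the rewrite lands on: it reads the same Context.Items
        // within the same request and emits the XML the SWF then loads.
        public class TourConfigHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/xml";
                context.Response.Write(string.Format(
                    "<config roomId=\"{0}\" ipixId=\"{1}\" />",
                    context.Items["IpixRoomId"], context.Items["IpixId"]));
            }
        }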

    Read the article

  • How can I make PHP scripts timeout gracefully while waiting for long-running MySQL queries?

    - by Mark B
    I have a PHP site which runs quite a lot of database queries. With certain combinations of parameters, these queries can end up running for a long time, triggering an ugly timeout message. I want to replace this with a nice timeout message themed according to the rest of my site style. Anticipating the usual answers to this kind of question:

    "Optimise your queries so they don't run for so long" - I am logging long-running queries and optimising them, but I only know about these after a user has been affected.

    "Increase your PHP timeout setting (e.g. set_time_limit, max_execution_time) so that the long-running query can finish" - Sometimes the query can run for several minutes. I want to tell the user there's a problem before that (e.g. after 30 seconds).

    "Use register_tick_function to monitor how long scripts have been running" - This only gets executed between lines of code in my script. While the script is waiting for a response from the database, the tick function doesn't get called.

    In case it helps, the site is built using Drupal (with lots of customisation), and is running on a virtual dedicated Linux server on PHP 5.2 with MySQL 5.

    Read the article

  • How to ensure nginx serves a request from an external IP?

    - by Matt
    I have a strange situation, where my nginx setup stopped handling external requests. I'm pretty stuck. If I hit the domain without a subdomain, I properly get redirected; however, if I request the full URL, that fails and doesn't log anything, anywhere. I am able to curl localhost on the server itself; however, when I attempt to curl from an external machine, it fails with:

        curl: (7) couldn't connect to host

    I've also noticed that bots can get through; I've seen Google hit the log every now and then. My nginx.conf file:

        upstream mongrels {
            server 127.0.0.1:5000;
        }

        server {
            listen 80;
            server_name culini.com;
            rewrite ^/(.*) http://www.culini.com/$1 permanent;
        }

        # the server directive is nginx's virtual host directive.
        server {
            # port to listen on. Can also be set to an IP:PORT
            listen 80;

            # Set the max size for file uploads to 50Mb
            client_max_body_size 50M;

            # sets the domain[s] that this vhost server requests for
            server_name www.culini.com;

            # doc root
            root /var/www/culini/current/public;

            log_format app '$remote_addr - $remote_user [$time_local] '
                           '"$request" $status $body_bytes_sent "$http_referer" '
                           '"$http_user_agent" "$http_x_forwarded_for" [$upstream_addr $upstream_response_time $upstream_status]';

            # vhost specific access log
            access_log /var/www/culini/current/log/nginx.access.log app;
            error_log /var/www/culini/current/log/nginx.error.log debug;

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect false;
                proxy_max_temp_file_size 0;
                proxy_intercept_errors on;
                proxy_ignore_client_abort on;

                if (-f $request_filename) {
                    break;
                }

                if (!-f $request_filename) {
                    proxy_pass http://mongrels;
                    break;
                }
            }
        }

    Please, please, any help would be greatly appreciated.

    Read the article

  • (closed) nodejs responding with an image (via pipe and response.end()) leads to strange behaviour

    - by johannesboyne
    What the problem was: I had 2 different write streams! If WriteStream #1 is closed, the second should be closed too and then it all should be piped... BUT node is asynchronous, so while one has been closed, the other one hasn't - even though stream.end() was called... Well, you should always wait for the close event! Thanks guys for your help!

    I am at my wit's end. I used this code to pipe an image to my clients:

        req.pipe(fs.createReadStream(__dirname+'/imgen/cached_images/' + link).pipe(res))

    It does work, but sometimes the image is not transferred completely, and no error is thrown on either the client side (browser) or the server side (node.js). My second try was:

        var img = fs.readFileSync(__dirname+'/imgen/cached_images/' + link);
        res.writeHead(200, { 'Content-Type' : 'image/png' });
        res.end(img, 'binary');

    but it leads to the same strange behaviour... Does anyone have a clue for me? Regards!

    (abstracted code...)

        var http = require('http');
        http.createServer(function (req, res) {
            Imgen.generateNew(
                'virtualtwins/www_leonardocampus_de/overview/28',
                'www.leonardocampus.de',
                'overview',
                '28',
                null,
                [],
                [],
                function (link) {
                    fs.stat(__dirname+'/imgen/cached_images/' + link, function(err, file_info) {
                        if (err) { console.log('err', err); }
                        console.log('file info', file_info.size);
                        res.writeHead(200, 'image/png');
                        fs.createReadStream(__dirname+'/imgen/cached_images/' + link).pipe(res);
                    });
                }
            );
        }).listen(13337, '127.0.0.1');

    Imgen.generateNew just creates a new file, saves it to disk and gives back the path (link).

    Read the article

  • Globally define parts of virtualhost definition to reduce redundancy

    - by user1428900
    I have set up an apache server, and this apache server points to a bunch of virtualhosts. The definitions of the virtualhosts are as follows:

        <VirtualHost *:80>
            ServerName <url>
            RewriteEngine on
            ReWriteCond %{SERVER_PORT} !^443$
            RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]

            # Enable client-side caching of resources
            ExpiresActive On
            ExpiresByType image/png "now plus 48 hours"
            ExpiresByType image/jpeg "now plus 48 hours"
            ExpiresByType image/gif "now plus 48 hours"
            ExpiresByType application/javascript "now plus 48 hours"
            ExpiresByType application/x-javascript "now plus 48 hours"
            ExpiresByType text/javascript "now plus 48 hours"
            ExpiresByType text/css "now plus 48 hours"
        </VirtualHost>
        .
        .
        .
        <VirtualHost *:80>
            ServerName <url>
            RewriteEngine on
            ReWriteCond %{SERVER_PORT} !^443$
            RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]

            # Enable client-side caching of resources
            ExpiresActive On
            ExpiresByType image/png "now plus 48 hours"
            ExpiresByType image/jpeg "now plus 48 hours"
            ExpiresByType image/gif "now plus 48 hours"
            ExpiresByType application/javascript "now plus 48 hours"
            ExpiresByType application/x-javascript "now plus 48 hours"
            ExpiresByType text/javascript "now plus 48 hours"
            ExpiresByType text/css "now plus 48 hours"
        </VirtualHost>

    I have a ton of virtualhost definitions similar to the examples shown above. Since most of the definitions other than the ServerName are the same, I was wondering if there is a way to define these common directives globally. I am new to apache configuration, and I feel that as the number of virtualhost definitions increases, my configuration file becomes longer and more redundant.

    Read the article

  • BackgroundWorker acting bizarrely...

    - by vdh_ant
    Hi guys, I'm working on some code that calls a service. This service call could fail, and if it does I want the system to try again until it works or too much time has passed. I am wondering where I am going wrong, as the following code doesn't seem to be working correctly... It randomly only does one to four loops...

        protected virtual void ProcessAsync(object data, int count)
        {
            var worker = new BackgroundWorker();
            worker.DoWork += (sender, e) =>
            {
                throw new InvalidOperationException("oh shiznit!");
            };
            worker.RunWorkerCompleted += (sender, e) =>
            {
                // If an error occurs we need to tell the data about it
                if (e.Error != null)
                {
                    count++;
                    System.Threading.Thread.Sleep(count * 5000);
                    if (count <= 10)
                    {
                        if (count % 5 == 0)
                            this.Logger.Fatal("LOAD ERROR - The system can't load any data", e.Error);
                        else
                            this.Logger.Error("LOAD ERROR - The system can't load any data", e.Error);

                        this.ProcessAsync(data, count);
                    }
                }
            };
            worker.RunWorkerAsync();
        }

    Cheers, Anthony
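
    Not from the original post - a sketch of one way to restructure the retry (with a hypothetical callService delegate standing in for the real service call): keep the loop and the back-off inside DoWork, so the sleep runs on the worker thread and RunWorkerCompleted fires exactly once with the final outcome in e.Error.

        using System;
        using System.ComponentModel;
        using System.Threading;

        public class RetryingCaller
        {
            private readonly Action callService; // hypothetical stand-in for the real call

            public RetryingCaller(Action callService)
            {
                this.callService = callService;
            }

            public void ProcessAsync(int maxAttempts)
            {
                var worker = new BackgroundWorker();

                worker.DoWork += (sender, e) =>
                {
                    for (int attempt = 1; attempt <= maxAttempts; attempt++)
                    {
                        try
                        {
                            callService();
                            return; // success - stop retrying
                        }
                        catch (Exception)
                        {
                            if (attempt == maxAttempts)
                                throw; // surfaces as e.Error in RunWorkerCompleted

                            // Back off on the worker thread, never on the UI thread.
                            Thread.Sleep(attempt * 5000);
                        }
                    }
                };

                worker.RunWorkerCompleted += (sender, e) =>
                {
                    // e.Error is non-null only if the final attempt also failed.
                };

                worker.RunWorkerAsync();
            }
        }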

    Read the article

  • Speeding up a group by date query on a big table in postgres

    - by zaius
    I've got a table with around 20 million rows. For argument's sake, let's say there are two columns in the table - an id and a timestamp. I'm trying to get a count of the number of items per day. Here's what I have at the moment:

        SELECT DATE(timestamp) AS day, COUNT(*)
        FROM actions
        WHERE DATE(timestamp) >= '20100101'
        AND DATE(timestamp) < '20110101'
        GROUP BY day;

    Without any indices, this takes about 30s to run on my machine. Here's the explain analyze output:

        GroupAggregate  (cost=675462.78..676813.42 rows=46532 width=8) (actual time=24467.404..32417.643 rows=346 loops=1)
          ->  Sort  (cost=675462.78..675680.34 rows=87021 width=8) (actual time=24466.730..29071.438 rows=17321121 loops=1)
                Sort Key: (date("timestamp"))
                Sort Method: external merge  Disk: 372496kB
                ->  Seq Scan on actions  (cost=0.00..667133.11 rows=87021 width=8) (actual time=1.981..12368.186 rows=17321121 loops=1)
                      Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 32447.762 ms

    Since I'm seeing a sequential scan, I tried to index on the date aggregate:

        CREATE INDEX ON actions (DATE(timestamp));

    Which cuts the runtime by about 50%:

        HashAggregate  (cost=796710.64..796716.19 rows=370 width=8) (actual time=17038.503..17038.590 rows=346 loops=1)
          ->  Seq Scan on actions  (cost=0.00..710202.27 rows=17301674 width=8) (actual time=1.745..12080.877 rows=17321121 loops=1)
                Filter: ((date("timestamp") >= '2010-01-01'::date) AND (date("timestamp") < '2011-01-01'::date))
        Total runtime: 17038.663 ms

    I'm new to this whole query-optimization business, and I have no idea what to do next. Any clues how I could get this query running faster?

    Read the article

  • Where to create/keep secret files for license information/trials on Windows/Mac OS X/Linux?

    - by BastiBense
    I'm writing a commercial product which uses a simple registration mechanism and allows the user to use the application for a demo period before purchasing. My application must store the registration information (if entered) and/or the date of the first launch somewhere, to calculate whether the user is still within the demo/trial period.

    While I'm pretty much finished with the registration mechanism itself, I now have to find a good way to store the registration information on the user's disk. The most obvious idea would be to store the trial period in the preferences file, but since users tend to delete/tinker with those from time to time, it might be a good idea to keep the registration information in a separate, more hidden file.

    So here's my question: What is the best place/strategy to keep and create such hidden files on Windows, Mac OS X and Linux? Here is what came to my mind so far:

    Linux/Mac OS X: Most Unix-like systems are rather locked down when it comes to places a user can write files to. In most cases this is only the /tmp directory and the user's home directory. I guess the easiest here is probably to create a file with a dot-prefix to make it less visible, then give it a name that won't make it obvious that it's associated with my application.

    Windows: Probably much like Linux/Mac OS X - more recent Windows versions become more restrictive when it comes to file system permissions.

    Anyway, I'd like to hear your ideas and thoughts. Even better if you have already implemented something similar in the past. Thanks!

    Update: For me the place for such files is more relevant than the discussion of whether this way of copy protection is good or bad.
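
    Not from the original post - a small C# sketch (one possible approach, assuming a .NET/Mono runtime; the vendor and file names are placeholders) that resolves a per-user application-data directory on each platform and relies on an innocuous file name rather than a special location:

        using System;
        using System.IO;

        static class LicenseStore
        {
            // On Windows this resolves to %APPDATA%\<vendor>; under Mono on
            // Linux/Mac it typically resolves to ~/.config/<vendor>.
            public static string GetLicenseFilePath()
            {
                string baseDir = Environment.GetFolderPath(
                    Environment.SpecialFolder.ApplicationData);
                string dir = Path.Combine(baseDir, "ExampleVendor"); // placeholder name
                Directory.CreateDirectory(dir); // no-op if it already exists
                return Path.Combine(dir, ".cache.dat"); // deliberately unremarkable name
            }
        }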

    Read the article

  • Which design pattern fits - does strategy make sense?

    - by user554833
    --Bump-- one desperate try to get someone's attention.

    I have a simple database table that stores a list of users who have subscribed to folders either by email OR to show up on the site (only on the web UI). In the storage table this is controlled by a number (1 = show on site, 2 = by email). When I am showing the UI I need to show a checkbox next to each of the folders for which the user has subscribed (both email and on site).

    There is a separate table which stores a set of default subscriptions which would apply to each user if the user has not expressed a subscription. This is basically a folder ID and a virtual group name. But email subscriptions do not count for applying these default groups. So if no "on site" subscription exists, apply the default group. That's the rule.

    How about a strategy pattern here (pseudo code)?

        Interface ISubscription
            public ArrayList GetSubscriptionData(Pass query object)

        Public class SubscriptionWithDefaultGroup
            Implement ArrayList GetSubscriptionData(Pass query object)

        Public class SubscriptionWithoutDefaultGroup
            Implement ArrayList GetSubscriptionData(Pass query object)

        Public class SubscriptionOnlyDefaultGroup
            Implement ArrayList GetSubscriptionData(Pass query object)

    Does this even make sense? I would be more than glad to receive any criticism / help / notes. I am learning. Cheers
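
    Not from the original post - a minimal C# sketch of the strategy idea sketched above, with hypothetical type names; the point of the pattern here is that the "explicit subscription vs. default group" decision lives in exactly one place, and each strategy only knows how to load its own kind of data.

        using System.Collections.Generic;

        // Hypothetical query/result types standing in for the real ones.
        public class SubscriptionQuery { public int UserId; public int FolderId; }
        public class Subscription { public int FolderId; public bool ByEmail; }

        public interface ISubscriptionStrategy
        {
            IList<Subscription> GetSubscriptionData(SubscriptionQuery query);
        }

        // Used when the user has an explicit "on site" subscription.
        public class ExplicitSubscriptionStrategy : ISubscriptionStrategy
        {
            public IList<Subscription> GetSubscriptionData(SubscriptionQuery query)
            {
                // ...load the user's own rows from the storage table...
                return new List<Subscription>();
            }
        }

        // Used when no "on site" subscription exists: fall back to the default group.
        public class DefaultGroupStrategy : ISubscriptionStrategy
        {
            public IList<Subscription> GetSubscriptionData(SubscriptionQuery query)
            {
                // ...load the folder IDs of the default virtual group instead...
                return new List<Subscription>();
            }
        }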

    Read the article

  • What happens to existing workspaces after upgrading to TFS 2010

    - by user351671
    Hi, I was looking for some insight about what happens to existing workspaces and files that are already checked out by people after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: a fresh TFS instance with upgraded databases.) I've checked the TFS installation guide and searched the web; all I could find are upgrade scenarios for the server side. Nobody even mentions what happens to source control clients.

    I've created a virtual machine to test the upgrade process. The upgrade was successful and all my files and workspaces exist on the new server too. The problem is: the new TFS installation has a new instance ID. When I redirected the clients to the new server, they seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder if it will be possible to keep working after the production upgrade.

    As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to some paper or blog post about it. Thanks in advance...

    Read the article

  • In Castle Windsor, can I register an interface component and get a proxy of the implementation?

    - by Thiado de Arruda
    Let's consider some cases:

        _windsor.Register(Component.For<IProductServices>().ImplementedBy<ProductServices>().Interceptors(typeof(SomeInterceptorType)));

    In this case, when I ask for an IProductServices, Windsor will proxy the interface to intercept the interface method calls. If instead I do this:

        _windsor.Register(Component.For<ProductServices>().Interceptors(typeof(SomeInterceptorType)));

    then I can't ask Windsor to resolve IProductServices; instead I ask for ProductServices and it will return a dynamic subclass that will intercept virtual method calls. Of course the dynamic subclass still implements IProductServices.

    My question is: can I register the interface component like in the first case, and get the subclass proxy like in the second case?

    There are two reasons for me wanting this:

    1 - Because the code that is going to resolve cannot know about the ProductServices class, only about the IProductServices interface.

    2 - Because some event invocations that pass the sender as a parameter will pass the ProductServices object, and in the first case this object is a field on the dynamic proxy, not the real object returned by Windsor.

    Let me give an example of how this can complicate things. Let's say I have a custom collection that does something when its items notify a property change:

        private void ItemChanged(object sender, PropertyChangedEventArgs e)
        {
            int senderIndex = IndexOf(sender);
            SomeActionOnItemIndex(senderIndex);
        }

    This code will fail if I added an interface proxy, because the sender will be the field in the interface proxy and IndexOf(sender) will return -1.
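
    Not from the original post - a sketch of one thing that may be worth trying, using the types from the question: registering the class and the interface as two services of the same component, so the same intercepted component can be resolved either way. Whether Windsor then builds a class proxy (a subclass of ProductServices) rather than an interface proxy is an assumption that depends on the Windsor version, so treat it as something to verify.

        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        public static class WindsorProxySketch
        {
            public static IWindsorContainer Configure()
            {
                var container = new WindsorContainer();

                // One component, two services (class first, interface forwarded).
                container.Register(
                    Component.For<ProductServices, IProductServices>()
                             .Interceptors(typeof(SomeInterceptorType)));

                // Both resolutions should hit the same intercepted component.
                IProductServices byInterface = container.Resolve<IProductServices>();
                ProductServices byClass = container.Resolve<ProductServices>();

                return container;
            }
        }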

    Read the article

  • ecommerce - use server-side code for hidden values in an HTML form

    - by bsarmi
    I'm trying to learn how to implement a donation form on a website using Virtual Merchant. The HTML code from their developer manual goes like this:

        <form action="https://www.myvirtualmerchant.com/VirtualMerchant/process.do" method="POST">
            Your Total: $5.00 <br/>
            <input type="hidden" name="ssl_amount" value="5.00"> <br/>
            <input type="hidden" name="ssl_merchant_id" value="my_virtualmerchant_ID">
            <input type="hidden" name="ssl_pin" value="my_PIN">
            <input type="hidden" name="ssl_transaction_type" value="ccsale">
            <input type="hidden" name="ssl_show_form" value="false">
            Credit Card Number: <input type="text" name="ssl_card_number"> <br/>
            Expiration Date (MMYY): <input type="text" name="ssl_exp_date" size="4"> <br/>
            <br/>
            <input type="submit" value="Continue">
        </form>

    I have that in an HTML file and it works fine, but they suggest that the merchant data (the input type="hidden" values) should be in server-side code. I was looking at cURL, but it's all very new to me and I spent a couple of hours trying to find some guide or some sample code on how to accomplish that. Any suggestions or help is greatly appreciated. Thanks!

    Read the article

  • Implementing the INotifyCollectionChanged interface

    - by George
    Hello, I need to implement a collection with special capabilities. In addition, I want to bind this collection to a ListView, therefore I ended up with the following code (I omitted some methods to make it shorter here in the forum):

        public class myCollection<T> : INotifyCollectionChanged
        {
            private Collection<T> collection = new Collection<T>();

            public event NotifyCollectionChangedEventHandler CollectionChanged;

            public void Add(T item)
            {
                collection.Insert(collection.Count, item);
                OnCollectionChange(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, item));
            }

            protected virtual void OnCollectionChange(NotifyCollectionChangedEventArgs e)
            {
                if (CollectionChanged != null)
                    CollectionChanged(this, e);
            }
        }

    I wanted to test it with a simple data class:

        public class Person
        {
            public string GivenName { get; set; }
            public string SurName { get; set; }
        }

    So I created an instance of the myCollection class as follows:

        myCollection<Person> _PersonCollection = new myCollection<Person>();
        public myCollection<Person> PersonCollection
        {
            get { return _PersonCollection; }
        }

    The problem is that the ListView does not update when the collection updates, although I implemented the INotifyCollectionChanged interface. I know that my binding is fine (in XAML) because when I use the ObservableCollection class instead of the myCollection class, like this:

        ObservableCollection<Person> _PersonCollection = new ObservableCollection<Person>();
        public ObservableCollection<Person> PersonCollection
        {
            get { return _PersonCollection; }
        }

    the ListView updates. What is the problem?
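
    Not from the original post - one detail that commonly matters here, offered as an assumption to check rather than a confirmed diagnosis: the binding engine has to be able to enumerate the collection, so a custom class usually needs IEnumerable<T> (or IList) in addition to INotifyCollectionChanged. A minimal sketch along the lines of the question's class (also passing the new item's index in the event args):

        using System.Collections;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;
        using System.Collections.Specialized;

        public class myCollection<T> : INotifyCollectionChanged, IEnumerable<T>
        {
            private readonly Collection<T> collection = new Collection<T>();

            public event NotifyCollectionChangedEventHandler CollectionChanged;

            public void Add(T item)
            {
                collection.Insert(collection.Count, item);
                OnCollectionChange(new NotifyCollectionChangedEventArgs(
                    NotifyCollectionChangedAction.Add, item, collection.Count - 1));
            }

            protected virtual void OnCollectionChange(NotifyCollectionChangedEventArgs e)
            {
                if (CollectionChanged != null)
                    CollectionChanged(this, e);
            }

            // Without an enumerator the ListView has no way to read the items.
            public IEnumerator<T> GetEnumerator()
            {
                return collection.GetEnumerator();
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }
        }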

    Read the article

  • How can I simplify this user interface?

    - by Bears will eat you
    I'm writing an internal-tools webapp; one of the central pages in this tool has a whole bunch of related commands the user can execute by clicking one of a number of buttons on the page, like this: [screenshot of the button row]

    Ideally, all of the buttons would fit on one line. Ordinarily I'd do this by changing each widget from a button with a (sometimes long) text label to a simple, compact icon - e.g. [text-labelled button] could be replaced by a familiar disk icon: [icon]

    Unfortunately, I don't think I can do this for every button on this particular page. Some of the command buttons just don't have good visual analogs - "VDS List". Or, if I needed to add another button in the future for some other kind of list, I'd need two icons that both communicate "list-ness" and which list. So, I'm still considering this option, but I don't love it.

    So it's come time for me to add yet another button to this section (don't you love internal tools?). There's not enough room on that single line to fit the new button. Aside from the icon solution I already mentioned, what would be a good* way to simplify/declutter/reduce or otherwise improve this UI?

    *As per Jakob Nielsen's article, I'd like to think that a dropdown menu is not the solution.

    Read the article
