Search Results

Search found 7802 results on 313 pages for 'unit tests'.

Page 264 of 313

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have: Database Server - physical box, 12 GB RAM, 2 X Quad Core CPU (2.13 GHz Xeon E5506) 2 Web / App servers - cloud servers, 2 GB RAM, 2 VCPUs 300 GB monthly bandwidth I am paying around $900 / month for this. My web / app servers are busting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config: Database server - "High Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units 2 Web / App servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units If I go with 1 year reserved instances, my upfront cost would be $4,500 and my monthly would be $700. This comes to $1,075 / month when amortized. Amazon also includes a firewall for free. Here are my questions: Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is with disk I/O. Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare? I am hoping that the Amazon solution will provide significantly better performance than my current or even improved GoGrid setup. Having a virtual database server would also be nice in terms of availability. Right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

    Read the article

  • StructureMap and injecting IEnumerable<T>

    - by GiddyUpHorsey
    I'm new to StructureMap and have some existing code that I'm working with that uses StructureMap 2.5.4. There is a class that is constructed using StructureMap that has a constructor that takes IEnumerable<TCar> as a parameter. The registry has the following code. Scan(x => { x.TheCallingAssembly(); x.WithDefaultConventions(); x.AddAllTypesOf<ICar>(); } ); ForRequestedType<IEnumerable<ICar>>().TheDefault.Is.ConstructedBy( x => ObjectFactory.GetAllInstances<ICar>()); I'm writing a unit test and have obtained a nested container off the ObjectFactory and have injected an instance using the Inject method. One of the instances of ICar should receive the injected type in its constructor. However it wasn't working and I tracked that down to the ObjectFactory.GetAllInstances() call which doesn't use my nested container. How can I get this to work? I also read about StructureMap autowiring arrays and IEnumerable instances but I couldn't get it to work. Is there a better way to rewrite the above registry code so that an instance of IEnumerable<TCar> will be created and use the injected type from my nested container?
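
    One direction worth trying, sketched below: resolve the enumerable through the context that is doing the current build instead of going back through the static ObjectFactory, so whatever a nested container has had injected is honoured. This assumes the ConstructedBy overload in 2.5.4 passes an IContext and that IContext exposes GetAllInstances<T>() (worth confirming against that exact build).

        using System.Collections.Generic;
        using StructureMap.Configuration.DSL;

        public class CarRegistry : Registry
        {
            public CarRegistry()
            {
                Scan(x =>
                {
                    x.TheCallingAssembly();
                    x.WithDefaultConventions();
                    x.AddAllTypesOf<ICar>();
                });

                // Ask the context building the current graph, not the static
                // ObjectFactory, so a nested container's injected ICar
                // instances show up in the enumerable.
                ForRequestedType<IEnumerable<ICar>>()
                    .TheDefault.Is.ConstructedBy(context => context.GetAllInstances<ICar>());
            }
        }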

    Read the article

  • The fastest way to iterate through a collection of objects

    - by Trev
    Hello all, First to give you some background: I have some research code which performs a Monte Carlo simulation. Essentially what happens is I iterate through a collection of objects, compute a number of vectors from their surface, then for each vector I iterate through the collection of objects again to see if the vector hits another object (similar to ray tracing). The pseudo code would look something like this: for each object { for a number of vectors { do some computations for each object { check if vector intersects } } } As the number of objects can be quite large and the number of rays is even larger, I thought it would be wise to optimise how I iterate through the collection of objects. I created some test code which tests arrays, lists and vectors, and for my first test cases found that vector iterators were around twice as fast as arrays; however, when I implemented a vector in my code it was somewhat slower than the array I was using before. So I went back to the test code and increased the complexity of the object function each loop was calling (a dummy function equivalent to 'check if vector intersects') and I found that when the complexity of the function increases, the execution time gap between arrays and vectors reduces until eventually the array was quicker. Does anyone know why this occurs? It seems strange that execution time inside the loop should affect the outer loop's run time.

    Read the article

  • .NET Web Service hydrate custom class

    - by row1
    I am consuming an external C# Web Service method which returns a simple calculation result object like this: [Serializable] public class CalculationResult { public string Name { get; set; } public string Unit { get; set; } public decimal? Value { get; set; } } When I add a Web Reference to this service in my ASP.NET project, Visual Studio is kind enough to generate a matching class so I can easily consume and work with it. I am using Castle Windsor and I may want to plug in other methods of getting a calculation result object, so I want a common class CalculationResult (or ICalculationResult) in my solution which all my objects can work with; this will always match the object returned from the external Web Service 1:1. Is there any way I can tell my Web Service client to hydrate a particular class instead of its generated one? I would rather not do it manually: foreach(var fromService in calculationResultsFromService) { ICalculationResult calculationResult = new CalculationResult() { Name = fromService.Name }; yield return calculationResult; } Edit: I am happy to use a Service Reference type instead of the older Web Reference.

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why when I watch the build output from a VC++ project in VS do I see: 1Compiling... 1a.cpp 1b.cpp 1c.cpp 1d.cpp 1e.cpp [etc...] 1Generating code... 1x.cpp 1y.cpp [etc...] The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using pre-compiled headers, I've gotten great speedups for each ".cpp" file, but there is a relatively long pause during the "Generating Code..." message. I do not have "Whole Program Optimization" nor "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each ".cpp" individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above). EDIT: The build log just shows the ".obj" files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps. EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed if I cancel the build before the "Generating code..." step, the ".obj" files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

    Read the article

  • JUnit: checking if a void method gets called

    - by nkr1pt
    I have a very simple filewatcher class which checks every 2 seconds if a file has changed and if so, the onChange method (void) is called. Is there an easy way to check if the onChange method is getting called in a unit test? code: public class PropertyFileWatcher extends TimerTask { private long timeStamp; private File file; public PropertyFileWatcher(File file) { this.file = file; this.timeStamp = file.lastModified(); } public final void run() { long timeStamp = file.lastModified(); if (this.timeStamp != timeStamp) { this.timeStamp = timeStamp; onChange(file); } } protected void onChange(File file) { System.out.println("Property file has changed"); } } @Test public void testPropertyFileWatcher() throws Exception { File file = new File("testfile"); file.createNewFile(); PropertyFileWatcher propertyFileWatcher = new PropertyFileWatcher(file); Timer timer = new Timer(); timer.schedule(propertyFileWatcher, 2000); FileWriter fw = new FileWriter(file); fw.write("blah"); fw.close(); Thread.sleep(8000); // check if propertyFileWatcher.onChange was called file.delete(); }

    Read the article

  • With NHibernate and Transaction do I rollback on commit failure or does it auto rollback on single c

    - by mattcodes
    I've built the following Dispose method for my Unit Of Work, which essentially wraps the active NH session & transaction (the transaction is stored in a variable after opening the session so it is not replaced if the NH session gets a new transaction after an error): public void Dispose() { Func<ITransaction,bool> transactionStateOkayFunc = trans => trans != null && trans.IsActive && !trans.WasRolledBack; try { if(transactionStateOkayFunc(this.transaction)) { if (HasErrored) { transaction.Rollback(); } else { try { transaction.Commit(); } catch (Exception) { if(transactionStateOkayFunc(transaction)) transaction.Rollback(); throw; } } } } finally { if(transaction != null) transaction.Dispose(); if(session.IsOpen) session.Close(); } I can't help feeling that this code is a little bloated. Will a transaction automatically roll back if a discrete Commit fails, in the case of non-nested transactions? Will Commit or Rollback automatically Dispose the transaction? If not, will Session.Close() automatically dispose the associated transaction?
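
    For comparison, a slightly leaner shape of the same pattern, assuming the transaction, session and HasErrored members are as in the post. The explicit Rollback in the catch block stays, since the usual guidance is not to rely on a failed Commit leaving the transaction rolled back on its own:

        public void Dispose()
        {
            try
            {
                if (transaction != null && transaction.IsActive && !transaction.WasRolledBack)
                {
                    if (HasErrored)
                    {
                        transaction.Rollback();
                    }
                    else
                    {
                        try
                        {
                            transaction.Commit();
                        }
                        catch
                        {
                            // Roll back explicitly rather than assuming a failed
                            // Commit cleans up after itself.
                            if (transaction.IsActive) transaction.Rollback();
                            throw;
                        }
                    }
                }
            }
            finally
            {
                if (transaction != null) transaction.Dispose();
                if (session != null && session.IsOpen) session.Close();
            }
        }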

    Read the article

  • Cannot access Class methods from previous windows form - C#

    - by George
    I am writing an app, still, where I need to test some devices every minute for 30 minutes. It made sense to use a timer set to kick off every 60 secs and do what's required in the event handler. However, I need the app to wait for the 30 mins until I have finished with the timer, since the following code alters the state of the devices I am trying to monitor. I obviously don't want to use any form of loop to do this. I thought of using another windows form, since I also display the progress, which will simply kick off the timer and wait until it's complete. The problem I am having with this is that I use a device Class and can't seem to get access to the methods in the device class from the 2nd (3rd actually - see below) windows form. I have an initial windows form where I get input from the user, then call the 2nd windows form where it works out which tests need to be done and which device classes need to be used, and then I want to call the 3rd windows form to handle the timer. I will have up to 6-7 device classes and so wanted to only instantiate them when actually requiring them, from the 2nd form. Should I have put this logic into the 1st windows form (program class?)? Would I not still have the problem of not being able to access device class methods from there too? Anyway, perhaps someone knows of a better way to do the checks every minute without the rest of the code executing (and changing the status of the devices), or how I should be accessing the methods in the app? Well, that's the problem: I can't get that part of it to work correctly. Here is the definition for the calling form including the device class - namespace NdtStart { public partial class fclsNDTCalib : Form { NDTClass NDT = new NDTClass(); public fclsNDTCalib() (new fclsNDTTicker(NDT)).ShowDialog(); Here is the class def for the called form - namespace NdtStart { public partial class fclsNDTTicker : Form { public fclsNDTTicker() I tried lots but couldn't get the arguments to work.
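
    A minimal sketch of the constructor-passing approach being attempted (the names reuse the post's; CheckDevices is a hypothetical stand-in for whatever the device class really exposes). The third form takes the already-created NDTClass instance and drives a Windows Forms timer itself, so the second form's ShowDialog call blocks until the 30 checks are done:

        using System;
        using System.Windows.Forms;

        public class fclsNDTTicker : Form
        {
            private readonly NDTClass ndt;                  // shared device instance handed in by the caller
            private readonly Timer pollTimer = new Timer(); // System.Windows.Forms.Timer
            private int ticks;

            public fclsNDTTicker(NDTClass ndt)
            {
                this.ndt = ndt;             // keep the caller's reference; do not create another NDTClass
                pollTimer.Interval = 60000; // fire once a minute
                pollTimer.Tick += OnTick;
                pollTimer.Start();
            }

            private void OnTick(object sender, EventArgs e)
            {
                ndt.CheckDevices();         // hypothetical method on the device class
                if (++ticks >= 30)
                {
                    pollTimer.Stop();
                    Close();                // the caller's ShowDialog() returns here
                }
            }
        }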

    Read the article

  • PHP: retrieve all declared namespaces of a DOMElement

    - by soulmerge
    I am using the DOM extension to parse an XML file containing XML namespaces. I would have thought that namespace declarations are treated just like any other attribute, but my tests seem to disagree. I have a document that starts like this: <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns="http://purl.org/rss/1.0/" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:syn="http://purl.org/rss/1.0/modules/syndication/" xmlns:prism="http://purl.org/rss/1.0/modules/prism/" xmlns:admin="http://webns.net/mvcb/" > And test code like this: $doc = new DOMDocument(); $doc->loadXml(file_get_contents('/home/soulmerge/tmp/rss1.0/recent.xml')); $root = $doc->documentElement; var_dump($root->tagName); # prints 'string(7) "rdf:RDF"' var_dump($root->attributes->item(0)); # prints 'NULL' var_dump($root->getAttributeNode('xmlns')); # prints 'object(DOMNameSpaceNode)#3 (0) {}' So the questions are: Does anyone know where I could find the documentation of DOMNameSpaceNode? A search on php.net does not yield any useful result. How do I extract all those namespace declarations from that DOMElement?

    Read the article

  • What are all the optimization tricks that you know for asp.net code?

    - by Aristos
    After a lot of programming on asp.net, I discovered the very big speed difference between string and StringBuilder. I know it is very common and well known, but I mention it as a starting point. The second thing that I have found to speed up the code is to use const, and not static, to declare my configuration constant values (especially the strings). With const, the compiler does not create a new object but simply places the value at the point where you ask for it, whereas with a static declaration it creates a new object and keeps it in memory. My third trick is that when I search for strings, I use hash values and not the strings themselves. For example, if I need a List<string> SomeValues and place inside it strings that I need to search, I prefer to use a List<int> SomeHashValues, and I use the hash value to locate the strings. My fourth thought, which I was wondering about, is whether it is better to place big strings on one line, or to separate them onto different lines with the + symbol so they are easier to read. I made some tests and saw that the compiler does a good job if you split the string across many lines using the + symbol. What other tricks/tips do you know and use in your programming to make it run faster, and maybe use less memory? Well, I know that sometimes, to make something run faster, you need more memory, more cache. My priority is speed. Because Speed Counts.
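
    To illustrate the first two points, a small self-contained example: a const value is substituted at compile time, and a StringBuilder grows one buffer instead of allocating a new string on every concatenation in a loop.

        using System;
        using System.Text;

        class ConcatDemo
        {
            // const is baked into the call sites at compile time; no field lookup at run time.
            private const string Separator = ", ";

            static string BuildCsv(string[] values)
            {
                // string + string in a loop allocates a new string each pass;
                // StringBuilder appends into a single growing buffer instead.
                var sb = new StringBuilder();
                for (int i = 0; i < values.Length; i++)
                {
                    if (i > 0) sb.Append(Separator);
                    sb.Append(values[i]);
                }
                return sb.ToString();
            }

            static void Main()
            {
                Console.WriteLine(BuildCsv(new[] { "one", "two", "three" }));
            }
        }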

    Read the article

  • Git Workflow With Capistrano

    - by jerhinesmith
    I'm trying to get my head around a good git workflow using capistrano. I've found a few good articles, but I'm either not grasping completely what they're suggesting (likely) or they're somewhat lacking. Here's kind of what I had in mind so far, but I get caught up when to merge back into the master branch (i.e. before moving to stage? after?) and trying to hook it into capistrano for deployments: Make sure you’re up to date with all the changes made on the remote master branch by other developers git checkout master git pull Create a new branch that pertains to the particular bug you're trying to fix git checkout -b bug-fix-branch Make your changes git status git add . git commit -m "Friendly message about the commit" So, this is usually where I get stuck. At this point, I have a master branch that is healthy and a new bug-fix-branch that contains my (untested -- other than unit tests) changes. If I want to push my changes to stage (through cap staging deploy), do I have to merge my changes back into the master branch (I'd prefer not to since it seems like master should be kept free of untested code)? Do I even deploy from master (or should I be tagging a release first and then modifying my production.rb file to deploy from that tag)? git-deployment seems to address some of these workflow issues, but I can't seem to find out how on earth it actually hooks into cap staging deploy and cap production deploy. Thoughts? I assume there's a likely canonical way to do this, but I either can't find it or I'm too new to git to recognize that I have found it. Help!

    Read the article

  • Looking for all-in-one drm/installer/CD creation kit.

    - by user30997
    The company I work for has a download manager in place that handles distribution, DRM, and installation of our products when a user gets them off our website. However, we're using a clunky system for packaging and protecting our products when we do press releases or make retail CDs. Part of the antiquation problem is the fact that the automated system that works with the installer- and DRM-creation software we have is a disaster that needs to be put out of my misery. Here is the list of products that we currently produce, which a new system MUST be capable of producing: Retail CDs, with a certain level of obfuscation to make copying difficult. Downloadable installers that time out after a few hours of use of the product. After the time has expired, removing and reinstalling the product will leave you still blocked from use. Installers that will fail to work after a certain date. I'd love to be able to just feed a tool the directory where a complete product resides and have the installer generated with a couple of command-line operations. (The command-line requirement is non-negotiable; this will be called by an automated tool.) A single-solution package would be far preferable. Software with royalty-based or per-unit licensing is not an option.

    Read the article

  • TSQL - make a literal float value

    - by David B
    I understand the host of issues in comparing floats, and lament their use in this case - but I'm not the table author and have only a small hurdle to climb... Someone has decided to use floats as you'd expect GUIDs to be used. I need to retrieve all the records with a specific float value. sp_help MyTable -- Column_name Type Computed Length Prec -- RandomGrouping float no 8 53 Here's my naive attempt: --yields no results SELECT RandomGrouping FROM MyTable WHERE RandomGrouping = 0.867153569942739 And here's an approximately working attempt: --yields 2 records SELECT RandomGrouping FROM MyTable WHERE RandomGrouping BETWEEN 0.867153569942739 - 0.00000001 AND 0.867153569942739 + 0.00000001 -- 0.867153569942739 -- 0.867153569942739 In my naive attempt, is that literal a floating point literal? Or is it really a decimal literal that gets converted later? If my literal is not a floating point literal, what is the syntax for making a floating point literal? EDIT: Another possibility has occurred to me... it may be that a more precise number than is displayed is stored in this column. It may be impossible to create a literal that represents this number. I will accept answers that demonstrate that this is the case. EDIT: response to DVK. TSQL is MSSQLServer's dialect of SQL. This script works, and so equality can be performed deterministically between float types: DECLARE @X float SELECT top 1 @X = RandomGrouping FROM MyTable WHERE RandomGrouping BETWEEN 0.839110948199148 - 0.000000000001 AND 0.839110948199148 + 0.000000000001 --yields two records SELECT * FROM MyTable WHERE RandomGrouping = @X I said "approximately" because that method tests for a range. With that method I could get values that are not equal to my intended value. The linked article doesn't apply because I'm not (intentionally) trying to straddle the world boundaries between decimal and float. I'm trying to work with only floats. This isn't about the non-convertibility of decimals to floats.

    Read the article

  • lock shared data using c#

    - by menacheb
    Hi, I have a program (C#) with a list of tests to do. Also, I have two threads: one to add tasks into the list, and one to read and remove the performed tasks from it. I'm using the 'lock' statement each time one of the threads wants to access the list. Another thing I want is that if the list is empty, the thread that needs to read from the list will sleep, and wake up when the first thread adds a task to the list. Here is the code I wrote: ... List<String> myList = new List(); Thread writeThread, readThread; writeThread = new Thread(write); writeThread.Start(); readThraed = new Thread(read); readThread.Start(); ... private void write() { while(...) { ... lock(myList) { myList.Add(...); } ... if (!readThread.IsAlive) { readThraed = new Thread(read); readThread.Start(); } ... } ... } private void read() { bool noMoreTasks = false; while (!noMoreTasks) { lock (MyList)//syncronize with the ADD func. { if (dataFromClientList.Count > 0) { String task = myList.First(); myList.Remove(task); } else { noMoreTasks = true; } } ... } readThread.Abort(); } Apparently I did it wrong, and it's not performing as expected (the readThread doesn't read from the list). Does anyone know what my problem is, and how to make it right? Many thanks,
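
    One way to get the "sleep until something is added" behaviour without re-creating the reader thread is Monitor.Wait/Pulse on the shared lock object. Below is a minimal sketch (class and member names are mine, not from the post); on .NET 4 and later, BlockingCollection<T> packages up the same pattern.

        using System.Collections.Generic;
        using System.Threading;

        class TaskQueue
        {
            private readonly Queue<string> tasks = new Queue<string>();
            private readonly object gate = new object();
            private bool finished;

            public void Add(string task)
            {
                lock (gate)
                {
                    tasks.Enqueue(task);
                    Monitor.Pulse(gate);        // wake a sleeping reader
                }
            }

            public void Complete()
            {
                lock (gate)
                {
                    finished = true;
                    Monitor.PulseAll(gate);     // let readers drain and exit
                }
            }

            public bool TryTake(out string task)
            {
                lock (gate)
                {
                    while (tasks.Count == 0 && !finished)
                        Monitor.Wait(gate);     // releases the lock and sleeps until a Pulse
                    if (tasks.Count > 0)
                    {
                        task = tasks.Dequeue();
                        return true;
                    }
                    task = null;
                    return false;
                }
            }
        }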

    Read the article

  • G++ Multi-platform memory leak detection tool

    - by indyK1ng
    Does anyone know where I can find a memory leak detection tool for C++ which can be run either from a command line or as an Eclipse plug-in on Windows and Linux? I would like it to be easy to use. Preferably one that doesn't overwrite new(), delete(), malloc() or free(). Something like GDB if it's going to be in the command line, but I don't remember that being used for detecting memory leaks. If there is a unit testing framework which does this automatically, that would be great. This question is similar to other questions (such as http://stackoverflow.com/questions/283726/memory-leak-detection-under-windows-for-gnu-c-c ), however I feel it is different because those ask for Windows-specific solutions or have solutions which I would rather avoid. I feel I am looking for something a bit more specific here. Suggestions don't have to fulfill all requirements, but as many as possible would be nice. Thanks. EDIT: Since this has come up, by "overwrite" I mean anything which requires me to #include a library or which otherwise changes how C++ compiles my code; if it does this at run time, so that running the code in a different environment won't affect anything, that would be great. Also, unfortunately, I don't have a Mac, so any suggestions for that are unhelpful, but thank you for trying. My desktop runs Windows (I have Linux installed but my dual monitors don't work with it) and I'd rather not run Linux in a VM, although that is certainly an option. My laptop runs Linux, so I can use that tool on there, although I would definitely prefer sticking to my desktop as the screen space is excellent for keeping all of the design documentation and requirements in view without having to move too much around on the desktop. NOTE: While I may try answers, I won't mark one as accepted until I have tried the suggestion and it is satisfactory. EDIT2: I'm not worried about the cross-platform compatibility of my code; it's a command line application using just the C++ libraries.

    Read the article

  • High Runtime for Dictionary.Add with a large number of items

    - by aaginor
    Hi folks, I have a C# application that stores data from a TextFile in a Dictionary object. The amount of data to be stored can be rather large, so it takes a lot of time inserting the entries. With many items in the Dictionary it gets even worse, because of the resizing of the internal array that stores the data for the Dictionary. So I initialized the Dictionary with the number of items that will be added, but this has no impact on speed. Here is my function: private Dictionary<IdPair, Edge> AddEdgesToExistingNodes(HashSet<NodeConnection> connections) { Dictionary<IdPair, Edge> resultSet = new Dictionary<IdPair, Edge>(connections.Count); foreach (NodeConnection con in connections) { ... resultSet.Add(nodeIdPair, newEdge); } return resultSet; } In my tests, I insert ~300k items. I checked the running time with ANTS Performance Profiler and found that the average time for resultSet.Add(...) doesn't change when I initialize the Dictionary with the needed size. It is the same as when I initialize the Dictionary with new Dictionary(); (about 0.256 ms on average for each Add). This is definitely caused by the amount of data in the Dictionary (ALTHOUGH I initialized it with the desired size). For the first 20k items, the average time for Add is 0.03 ms for each item. Any idea how to make the Add operation faster? Thanks in advance, Frank
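
    Since pre-sizing did not help, one thing worth ruling out is the key type itself: if IdPair relies on default struct equality or a GetHashCode that collides heavily, each Add degenerates into walking an ever-longer collision chain, which matches the slowdown as the table grows. A sketch of what the key could look like (the field names are guesses, not from the post):

        using System;

        public struct IdPair : IEquatable<IdPair>
        {
            public readonly int From;   // guessed field names
            public readonly int To;

            public IdPair(int from, int to)
            {
                From = from;
                To = to;
            }

            public bool Equals(IdPair other)
            {
                return From == other.From && To == other.To;
            }

            public override bool Equals(object obj)
            {
                return obj is IdPair && Equals((IdPair)obj);
            }

            public override int GetHashCode()
            {
                // Mix both ids; a hash that maps many pairs to the same value
                // turns each Dictionary.Add into a scan of one long bucket.
                unchecked { return (From * 397) ^ To; }
            }
        }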

    Read the article

  • Google Analytics Script inside XML File

    - by mnml
    Hi, I would like to know If I can add my ecommerce analytics tags inside an xml formated file? <?xml version="1.0" encoding="UTF-8" standalone="yes" ?> <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-XXXXX-X']); _gaq.push(['_trackPageview']); _gaq.push(['_addTrans', '1234', // order ID - required 'Acme Clothing', // affiliation or store name '11.99', // total - required '1.29', // tax '5', // shipping 'San Jose', // city 'California', // state or province 'USA' // country ]); // add item might be called for every item in the shopping cart // where your ecommerce engine loops through each item in the cart and // prints out _addItem for each _gaq.push(['_addItem', '1234', // order ID - required 'DD44', // SKU/code 'T-Shirt', // product name 'Green Medium', // category or variation '11.99', // unit price - required '1' // quantity - required ]); _gaq.push(['_trackTrans']); //submits transaction to the Analytics servers (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(ga); })(); </script>

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions has many Payments In my result set, I need the following aggregate values: Number of unique contributors (DonorID on the Contribution table) Total contributed (SUM of Amount on Contribution table) Total paid (SUM of PaymentAmount on Payment table) Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions with the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options: Using subqueries: SELECT Project.ID AS PROJECT_ID, (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack, (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount, (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived FROM Project; Using a temporary table: DROP TABLE IF EXISTS Project_Temp; CREATE TEMPORARY TABLE Project_Temp (project_id INT NOT NULL, total_payments INT, total_donors INT, total_received INT, PRIMARY KEY(project_id)) ENGINE=MEMORY; INSERT INTO Project_Temp (project_id,total_payments) SELECT `Project`.ID, IFNULL(SUM(PaymentAmount),0) FROM `Project` LEFT JOIN `Payment` ON ProjectID = `Project`.ID GROUP BY 1; INSERT INTO Project_Temp (project_id,total_donors,total_received) SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID),0), IFNULL(SUM(Amount),0) FROM `Project` LEFT JOIN `Contribution` ON RecipientID = `Project`.ID GROUP BY 1 ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors), total_received = VALUES(total_received); SELECT * FROM Project_Temp; Tests for both are pretty comparable, in the 0.7 - 0.8 seconds range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?

    Read the article

  • What arguments to use to explain why a SQL DB is far better than a flat file

    - by jamone
    The higher ups in my company were told by good friends that flat files are the way to go, and we should switch from MS SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with we have 10 billion records in quite a few of them with upwards of 100k new records a day and who knows how many updates... Me and a couple others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought about making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network doing large inserts, searches, updates etc along with things like network disconnects randomly. This would show them how bad flat files can be especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far: Security Concurrent access Performance with large amounts of data Amount of time to do such a massive rewrite/switch Lack of transactions PITA to map relational data to flat files I fear that this will be a great post on the Daily WTF someday if I can't stop it now.
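
    A rough shape for that console demo, offered as a sketch only: the connection string, Records table and row counts are placeholders, and a real comparison would also batch the SQL inserts inside a transaction. The point is just to time the same insert-then-lookup against a flat file on a share and against the database.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Diagnostics;
        using System.IO;
        using System.Linq;

        class FlatFileVsSql
        {
            const string ConnStr = @"Server=.\SQLEXPRESS;Database=DemoDb;Integrated Security=true"; // placeholder
            const string FlatFile = @"\\fileserver\share\records.txt";                              // placeholder

            static void Main()
            {
                var sw = Stopwatch.StartNew();
                for (int i = 0; i < 100000; i++)
                    File.AppendAllText(FlatFile, i + "|" + Guid.NewGuid() + Environment.NewLine);
                Console.WriteLine("Flat file insert: " + sw.Elapsed);

                sw.Restart();
                // A lookup against the flat file means reading and parsing every line.
                var hit = File.ReadLines(FlatFile).FirstOrDefault(l => l.StartsWith("99999|"));
                Console.WriteLine("Flat file search: " + sw.Elapsed + " -> " + hit);

                sw.Restart();
                using (var conn = new SqlConnection(ConnStr))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand("INSERT INTO Records (Id, Payload) VALUES (@id, @p)", conn))
                    {
                        cmd.Parameters.Add("@id", SqlDbType.Int);
                        cmd.Parameters.Add("@p", SqlDbType.UniqueIdentifier);
                        for (int i = 0; i < 100000; i++)
                        {
                            cmd.Parameters["@id"].Value = i;
                            cmd.Parameters["@p"].Value = Guid.NewGuid();
                            cmd.ExecuteNonQuery();
                        }
                    }
                    Console.WriteLine("SQL insert: " + sw.Elapsed);

                    sw.Restart();
                    using (var cmd = new SqlCommand("SELECT Payload FROM Records WHERE Id = @id", conn))
                    {
                        cmd.Parameters.AddWithValue("@id", 99999);
                        cmd.ExecuteScalar();    // indexed seek instead of a full scan
                    }
                    Console.WriteLine("SQL search: " + sw.Elapsed);
                }
            }
        }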

    Read the article

  • Customizing the TFS 2008 build sequence to avoid compilation and deploy SSRS

    - by Andrew
    I'm trying to create a CI process for SQL Server Reporting Services. I am fairly new to TFS but quite experienced with MSBuild. In the past I've used a combination of MSBuild with Team City, so the whole build process is more or less custom. Here lies the start of my problems: as the solution I am deploying only contains Report Server projects (rds), no compilation is required. I thought that I would override the first default task that TFS runs (EndToEndIteration) to override the default TFS build sequence and inject my own. The first snag that I have come across is that the build always fails; how can I set the status of the build to success? Currently the EndToEndIteration task is very light and only has a message. Is this the best method to create a custom build process in TFS where compilation is not required? Or should I use the default sequence and override one of the hook tasks mentioned in http://msdn.microsoft.com/en-us/library/aa337604%28VS.80%29.aspx (i.e. AfterCompile)? The core steps that I'd like to achieve are: Bundle the RDL and datasource files Connect to the host server to register/deploy the reports Re-apply any subscriptions that previously existed Run tests to verify the deployment succeeded and is returning results as expected I have found another article on Report services deployment: http://stackoverflow.com/questions/88710/reporting-services-deployment But it doesn't mention the best practice for customizing the standard build process. Any help would be appreciated.

    Read the article

  • convert list of relative widths to pixel widths

    - by mkoryak
    This is a code review question more than anything. I have the following problem: Given a list of relative widths (no unit whatsoever, just all relative to each other), generate a list of pixel widths so that these pixel widths have the same proportions as the original list. input: list of proportions, total pixel width. output: list of pixel widths, where each width is an int, and the sum of these equals the total width. Code: var sizes = "1,2,3,5,7,10".split(","); //initial proportions var totalWidth = 1024; // total pixel width var sizesTotal = 0; for (var i = 0; i < sizes.length; i++) { sizesTotal += parseInt(sizes[i], 10); } if(sizesTotal != 100){ var totalLeft = 100; for (var i = 0; i < sizes.length; i++) { sizes[i] = Math.floor(parseInt(sizes[i], 10) / sizesTotal * 100); totalLeft -= sizes[i]; } sizes[sizes.length - 1] = totalLeft; } totalLeft = totalWidth; var widths = []; for (var i = 0; i < sizes.length; i++) { widths[i] = Math.floor(totalWidth / 100 * sizes[i]); totalLeft -= widths[i]; } widths[sizes.length - 1] = totalLeft; //return widths which contains a list of INT pixel sizes

    Read the article

  • How to trigger the event on two different classes together.

    - by XBasic3000
    I have two object classes in a single unit; is it possible to trigger the two events together? Let's say the FIRSTCLASS event is fired; the SECONDCLASS event should also fire. Assuming...... //{Class 1}------------------------------------------------------------- type TOnEventTrigger = procedure(Sender : Tobject; Value :integer); TMyFirstClass = Class(Tcomponent) private .... public .... property OnEventTrigger : TOnEventTrigger read Fevent write Fevent; end; procedure TMyFirstClass.FEvnt(Sender : Tobject; Value :integer); begin // here it normally triggers the event // if Assigned(OnEventTrigger) then OnEventTrigger(Self,FSomevalue); // POSTMessage(GetForegroundWindow,WM_USER+3,0,0); // this is what I did here to get the result of FSomevalue // but this is not ideal. It works only on the focused window. end; //{Class 2}------------------------------------------------------------- type TOnEventTrigger = procedure(Sender : Tobject; Value :integer); TMySecondClass = Class(Tobject) private .... public .... property OnEventTrigger : TOnEventTrigger read Fevent write Fevent; end; procedure TMySecondClass.FEvnt(Sender : Tobject; Value :integer); begin // I wanted this to trigger whenever the above is fired // if Assigned(OnEventTrigger) then OnEventTrigger(Self,FSomevalue); end;

    Read the article

  • Content Management Systems for Adaptive Content [closed]

    - by andrewap
    Content management systems (CMS) allow us to easily maintain blogs, news sites, general websites, and so on. Many of them are designed to manage pages of content, and provide tools to organize and customize how that content is displayed on the web. However, as explained by Mark Boulton in his Adaptive Content Management article, and by Karen McGrane in her talk on Adapting Ourselves to Adaptive Content, we are increasingly delivering content not just to the web, but also to other platforms and channels. We need tools to manage pieces of content with meaningful metadata attached. Create once, publish everywhere. The main idea is to store content cleanly, without intertwining it with presentation markup specific to the web. Because pieces of content is compartmentalized semantically, it can easily adapt to fit in different platforms and channels. Hence, it's called adaptive content. Let's look at a quick example to compare: Say I manage news articles and events. To create a news article, I would tell the CMS the type of content I'm creating, and be asked to fill in a form with individual fields tailored to news articles (e.g. headline, subtitle, full text, short snippet, and images). — i.e. pieces of content With a traditional web publishing tool, I would probably have had to create a new page under News, and then type in and format the news article in a blank WYSIWYG text editor. — i.e. pages of content As you can see, the first design allows me to individually specify content in its smallest semantic unit. When I want to display or consume it, the system can easily provide the pieces I need. So here's my question: Is there a CMS that is designed specifically with adaptive content in mind, and that is decoupled with the presentation layer? Note: This is not a discussion about the best CMS, or which CMS I should use. I am asking whether a very specific type of tool — CMS designed for adaptive content — exists for developers to use.

    Read the article

  • Group variables in a boxplot in R

    - by tao.hong
    I am trying to generate a boxplot whose data come from two scenarios. In the plot, I would like to group boxes by their names (So there will be two boxes per variable). I know ggplot would be a good choice. But I got errors which I could not figure out. Can anyone give me some suggestions? sensitivity_out1 structure(c(0.0522902104339716, 0.0521369824334004, 0.0520240345973737, 0.0519818337359876, 0.051935071418996, 0.0519089404325544, 0.000392698277338341, 0.000326135474295325, 0.000280863338343747, 0.000259631566041935, 0.000246594043996332, 0.000237923540393391, 0.00046732650331544, 0.000474448907808135, 0.000478287273678457, 0.000480194683464109, 0.000480631753078668, 0.000481760272726273, 0.000947965771207979, 0.000944821699830455, 0.000939631071343889, 0.000937186900570605, 0.000936007346568281, 0.000934756220144141, 0.00132442589501872, 0.00132658367774979, 0.00133334696220742, 0.00133622384928092, 0.0013381577476241, 0.00134005741746304, 0.0991622968751298, 0.100791399440082, 0.101946808417405, 0.102524244727408, 0.102920085260477, 0.103232984259916, 0.0305219507186844, 0.0304635269233494, 0.0304161055015213, 0.0303742106794513, 0.0303381888169022, 0.0302996157711171, 1.94268588634518e-05, 2.23991225564447e-05, 2.5756135487907e-05, 2.79997917298194e-05, 3.00753967077715e-05, 3.16270817369878e-05, 0.544701146678523, 0.542887331601984, 0.541632986366816, 0.541005610554556, 0.540617004208336, 0.540315690692195, 0.000453386694666078, 0.000448473414508756, 0.00044692043197248, 0.000444826296854332, 0.000445747996014684, 0.000444764303682453, 0.000127569551159321, 0.000128422491392669, 0.00012933662856487, 0.000129941842982939, 0.000129578971489026, 0.000131113075233758, 0.00684610571790029, 0.00686349387897349, 0.00687468164010565, 0.00687880720347743, 0.00688275579317197, 0.00687822247621936), .Dim = c(6L, 12L)) out2 structure(c(0.0189965816735366, 0.0189995096225103, 0.0190099362589894, 0.0190033523148514, 0.01900896721937, 0.0190099427513381, 0.00192043989797585, 0.00207303208721059, 0.00225931163225165, 0.0024049969048389, 0.00252310364086785, 0.00262940166568126, 0.00195164921633517, 0.00190079923515755, 0.00186139563778548, 0.00184188171395076, 0.00183248544676564, 0.00182492970673969, 1.83038731485927e-05, 1.98252671720347e-05, 2.14794764479231e-05, 2.30713122969332e-05, 2.4484220713564e-05, 2.55958833705284e-05, 0.0428066864455102, 0.0431686808647809, 0.0434411033615353, 0.0435883377765726, 0.0436690169266633, 0.0437340464360965, 0.145288252474567, 0.141488776430307, 0.138204532539654, 0.136281799717717, 0.134864952272761, 0.133738386148036, 0.0711728636959696, 0.072031388688795, 0.0727536853228245, 0.0731581966147734, 0.0734424337399303, 0.0736637270702609, 0.000605277151497094, 0.000617268349064968, 0.000632975679951382, 0.000643904422677427, 0.000653775268094148, 0.000662225067910141, 0.26735354610469, 0.267515415990146, 0.26753155165617, 0.267553498616325, 0.267532284594615, 0.267510330320289, 0.000334158771646756, 0.000319032383145857, 0.000306074699839994, 0.000299153278494114, 0.000293956197852583, 0.000290171804454218, 0.000645975219899115, 0.000637548672578787, 0.000632375486965757, 0.000629579821884212, 0.000624956458229123, 0.000622456283217054, 0.0645188290106884, 0.0651539609630352, 0.0656417364889907, 0.0658996698322889, 0.0660715073023965, 0.0662034341510152), .Dim = c(6L, 12L)) Melt data: group variable value 1 1 PLDKRT 0 2 1 PLDKRT 0 3 1 PLDKRT 0 4 1 PLDKRT 0 5 1 PLDKRT 0 6 1 PLDKRT 0 Code: #Data_source 1 sensitivity_1=rbind(sensitivity_out1,sensitivity_out2) 
    sensitivity_1=data.frame(sensitivity_1) colnames(sensitivity_1)=main_l #variable names sensitivity_1$group=1 #Data_source 2 sensitivity_2=rbind(sensitivity_out1[3:4,],sensitivity_out2[3:4,]) sensitivity_2=data.frame(sensitivity_2) colnames(sensitivity_2)=main_l sensitivity_2$group=2 sensitivity_pool=rbind(sensitivity_1,sensitivity_2) sensitivity_pool_m=melt(sensitivity_pool,id.vars="group") ggplot(data = sensitivity_pool_m, aes(x = variable, y = value)) + geom_boxplot(aes( fill= group), width = 0.8) Error: "Error in unit(tic_pos.c, "mm") : 'x' and 'units' must have length > 0" Update: I figured out the error. I should use geom_boxplot(aes( fill= factor(group)), width = 0.8) rather than fill= group

    Read the article

  • What arguments to use to explain why SQL Server is far better than a flat file

    - by jamone
    The higher ups in my company were told by good friends that flat files are the way to go, and we should switch from SQL Server to them for everything we do. We have over 300 servers and hundreds of different databases. From just the few I'm involved with we have 10 billion records in quite a few of them with upwards of 100k new records a day and who knows how many updates... Me and a couple others need to come up with a response saying why we shouldn't do this. Most of our stuff is ASP.NET with some legacy ASP. We thought about making a simple console app that tests/times the same interactions between a flat file (stored on the network) and SQL over the network doing large inserts, searches, updates etc along with things like network disconnects randomly. This would show them how bad flat files can be especially when you are dealing with millions of records. What things should I use in my response? What should I do with my demo code to illustrate this? My short list so far: Security Concurrent access Performance with large amounts of data Amount of time to do such a massive rewrite/switch Lack of transactions PITA to map relational data to flat files NTFS doesn't support tons of files in a directory well I fear that this will be a great post on the Daily WTF someday if I can't stop it now.

    Read the article
