Search Results

Search found 25149 results on 1006 pages for 'test automation'.


  • How Do I Migrate 100 DBs From One MS-SQL 2008 Server To Another? (looking for automation)

    - by jc4rp3nt3r
    Let me start by saying that I am not a DBA, but I am in a position where I am responsible for moving just under 100 MS-SQL 2008 DBs from our current development server to a new/better/faster development server. As this is just a local dev server, temporary downtime is acceptable, but I am looking for a way to move all of the databases (preferably in bulk). I know that I could take a .bak backup of each and restore it on the new server, but given the volume of DBs, I am looking for a more efficient way. I am not opposed to learning a new piece of software, writing code, or any other requirement, so long as it speeds up the process.
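    One hedged approach: rather than hand-writing roughly 100 commands, generate the BACKUP statements from sys.databases, and generate the matching RESTORE statements the same way on the new server. A minimal T-SQL sketch, assuming a share both servers can reach (the \\newserver\migration path is a placeholder):

        -- Build one BACKUP statement per user database (review before running).
        DECLARE @sql nvarchar(max) = N'';
        SELECT @sql += N'BACKUP DATABASE ' + QUOTENAME(name)
                     + N' TO DISK = N''\\newserver\migration\' + name + N'.bak'' WITH INIT;'
                     + CHAR(13) + CHAR(10)
        FROM sys.databases
        WHERE database_id > 4;        -- skip master, tempdb, model, msdb

        PRINT @sql;                   -- inspect the generated script first
        -- EXEC sp_executesql @sql;   -- then execute it

    Detach/copy/attach is another common bulk option when downtime is acceptable, and the RESTORE side can add MOVE clauses if file paths differ between the servers.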

    Read the article

  • Weird Results A/B Test in Google Website Optimizer

    - by Yisroel
    I set up a test in Google Website Optimizer that has 3 variations - original (A), B, and C. In order to further validate the results of the test, I added variation C as an exact copy of the original. And that's where the results get weird. 6 days into the test, the best-performing variation is C. It outperforms the original by 18.4%! How is that possible? Should I now discount the results of this test entirely?

    Read the article

  • How to test controllers with CodeIgniter PART 2?

    - by Jeff
    I am having difficulties testing controllers in CodeIgniter: I use Toast, but when I invoke my Home controller class I get an exception that "db" is not defined. Does anybody have an idea how to test this one-to-one? Thanks.

        class Home_tests extends Toast {
            function __construct() {
                parent::__construct(__FILE__);
                // Load any models, libraries etc. you need here
            }

            function test_select_user() {
                $controller = new Home();
                $query = $controller->getDbUser('[email protected]', 'password');
                assert($query->num_rows() == 0);
            }
        }
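    One commonly suggested direction, offered here as a hedged sketch rather than a verified fix (Toast and CodeIgniter versions differ): make sure the database library is loaded before the controller under test runs, since the "db" property only exists after something has called the loader.

        class Home_tests extends Toast {
            function __construct() {
                parent::__construct(__FILE__);
                // Assumption: loading the database here makes $this->db
                // available to code running in this test request.
                $this->load->database();
            }
        }

    Note that instantiating one controller from inside another (new Home()) is unusual in CodeIgniter; moving getDbUser() into a model and loading that model in the test is often the cleaner route.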

    Read the article

  • Create an English test

    - by thienthai
    Hi everyone, I want to create an English-test application using Windows Forms and C#, something like the example below:

        Hi David,
        Thanks for your e-mail. I hope things get easier for you before the weekend. You've been (be) really busy this week!
        1 _______ (you / make) your vacation plans yet? Last May, I 2 _________ (go) to Japan with my family again. We 3 _______ (be) there three times now! But this time, we 4 _______ (not stay) with my aunt in Tokyo. Instead, we 5 ______ (drive) around to different places. Then in July, my friend Angie and I 6 ________ (travel) to Peru. 7 _______ (you / ever / be) there? It's one of the most interesting places I 8 _______ (ever / visit). The ruins of Machu Picchu are amazing.
        Write soon!
        Mariko

    How can I display this in a Windows Form so that users can fill in the blanks (_______) directly? Thanks.
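    A minimal Windows Forms sketch of one way to do it: split the passage on a blank marker and interleave Labels with TextBoxes in a FlowLayoutPanel. The "___" marker, the control sizes, and the shortened sample text are assumptions for illustration, not a full design:

        using System;
        using System.Windows.Forms;

        public class QuizForm : Form
        {
            public QuizForm()
            {
                // One Label per prose fragment; a TextBox after each fragment
                // except the last, i.e. one blank per marker.
                var panel = new FlowLayoutPanel { Dock = DockStyle.Fill, AutoScroll = true };
                string text = "Last May, I ___ (go) to Japan again. We ___ (be) there three times now!";

                string[] parts = text.Split(new[] { "___" }, StringSplitOptions.None);
                for (int i = 0; i < parts.Length; i++)
                {
                    panel.Controls.Add(new Label { Text = parts[i], AutoSize = true });
                    if (i < parts.Length - 1)
                        panel.Controls.Add(new TextBox { Width = 80 });
                }
                Controls.Add(panel);
            }

            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.Run(new QuizForm());
            }
        }

    Grading could then walk the panel's TextBox controls in order and compare each entry against an answer key.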

    Read the article

  • Black box test cases for insertion procedure

    - by AJ
    insertion_procedure(int a[], int p[], int N)
    {
        int i, j, k;
        for (i = 0; i <= N; i++)
            p[i] = i;
        for (i = 2; i <= N; i++) {
            k = p[i];
            j = 1;
            while (a[p[j-1]] > a[k]) { p[j] = p[j-1]; j--; }
            p[j] = k;
        }
    }

    What would be a few good black-box test cases for this particular insertion procedure?
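    Since this is black-box testing, expected outputs must come from the specification (p should end up as an index permutation that sorts a), not from the code itself. A minimal C harness to drive candidate cases; the prototype is an assumption (give the procedure an explicit return type if your compiler requires one):

        #include <stdio.h>

        void insertion_procedure(int a[], int p[], int N);  /* assumed prototype */

        int main(void)
        {
            /* Candidate black-box cases worth trying: already sorted, reverse
               sorted, all elements equal, duplicates, and the smallest sizes
               (N = 0 and N = 1, i.e. one- and two-element arrays). */
            int a[] = {3, 1, 2, 0};
            int p[4];                      /* the procedure fills p[0..N] */
            int i;

            insertion_procedure(a, p, 3);  /* N is the last valid index here */
            for (i = 0; i <= 3; i++)
                printf("p[%d] = %d\n", i, p[i]);
            return 0;
        }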

    Read the article

  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up a previous article about timeouts and how they can affect your application, I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results.

    The sample

    We begin by consuming exactly the same web service, which is sitting on a remote server. This time I have created a .NET 3.5 application which will consume the web service using the basicHttp binding. The client configuration is shown in a diagram in the original post; you can see that, like before, we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods), and the application contains a code sample which asynchronously makes the web service calls and handles the responses in a callback method invoked by a delegate. If you have read the previous article, you will notice that the code is almost the same.

    Sample 1 – WCF with Default Timeouts

    In this test I set about recreating the same scenario as before, but this time using WCF as the messaging component. For the first test I used the default configuration settings which WCF had set up when we added a reference to the web service. The timeout values for this test are:

    closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"

    The Test: We simulated 21 calls to the web service.

    Test Results: (the client-side and server-side traces are shown as images in the original post)

    Some observations on the results are as follows:
    - The timeouts happened more quickly than in the previous tests because some calls were timing out before they attempted to connect to the server.
    - The first few calls that timed out did actually connect to the server and did execute successfully on the server.

    Test 2 – Increase Open Connection Timeout & Send Timeout

    In this test I wanted to increase both the send and open timeout values to try to give everything a chance to go through. The timeout values for this test are:

    closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"

    The Test: We simulated 21 calls to the web service.

    Test Results: (client-side and server-side traces in the original post)

    Some observations on this test are:
    - This test proved that if the timeouts are high enough, everything will just go through.

    Test 3 – Increase Just the Send Timeout

    In this test we wanted to increase just the send timeout. The timeout values for this test are:

    closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"

    The Test: We simulated 21 calls to the web service.

    Test Results: (client-side and server-side traces in the original post)

    Some observations on this test are:
    - In this test, from both the client and server perspective, everything ran through fine.
    - The open connection timeout did not seem to have any effect.

    Test 4 – Increase Just the Open Connection Timeout

    In this test I wanted to validate the change to the open connection setting by increasing just this on its own. The timeout values for this test are:

    closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"

    The Test: We simulated 21 calls to the web service.

    Test Results: (client-side and server-side traces in the original post)

    Some observations on this test are:
    - Increasing the open connection timeout, which relates to opening the channel, was not the thing which stopped the calls timing out; it is the send of data which is timing out.
    - On the server you can see that the few successful calls were fine, but there were also a few calls which hit the server yet timed out on the client.
    - You can see that not all calls hit the server, which was one of the problems with the WSE and ASMX options.

    Test 5 – Smaller Increase in Send Timeout

    In this test I wanted to make a smaller increase to the send timeout than before, just to prove that it was the key setting controlling what was timing out. The timeout values for this test are:

    openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:02:30"

    The Test: We simulated 21 calls to the web service.

    Test Results: (client-side and server-side traces in the original post)

    Some observations on this test are:
    - You can see that most of the calls got through fine.
    - On the client you can see that call 20 timed out but still hit the server and executed fine.

    Summary

    At this point, between the two articles, we have quite a lot of scenarios showing the different ways the timeout settings played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks:

    ASMX: The timeout value only applies to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client-side to get a connection.

    WSE: The timeout value includes both the time to obtain a connection and the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40-second timeout setting may not throw the error until 60 seconds, when the connection to the server is made. If the connection to the server is made, you should be aware that your message will be processed, and you should design for this.

    WCF: The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE, this setting's counter includes the time to get a connection as well as the time to execute on the server. Unlike WSE and ASMX, an error will be thrown as soon as the send timeout has elapsed from making your call in user code, regardless of whether we are waiting for a connection or have an open connection to the server. To a user, this may appear to give better latency in getting an error response compared to WSE or ASMX.
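    For reference, these four values live on the client's binding element in app.config. A representative sketch using the Test 3 values (the binding name is a placeholder, not taken from the original post):

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <!-- Test 3: only sendTimeout raised above the defaults -->
              <binding name="RemoteServiceBinding"
                       closeTimeout="00:01:00"
                       openTimeout="00:01:00"
                       receiveTimeout="00:10:00"
                       sendTimeout="00:10:00" />
            </basicHttpBinding>
          </bindings>
        </system.serviceModel>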

    Read the article

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren't as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure Connect endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure Connect endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click publish and *BOOM*, in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents as well as VS 2012 agents.

    Introduction

    The slide show below should help you understand the high-level details of what we are trying to achieve... (slide deck: Leveraging Azure for Performance Testing, from Avanade)

    Scenario 1 – Running a Test Rig in Windows Azure

    To start off with the basics, in the first scenario I plan to discuss how to:
    - Automate deployment & configuration of Windows Azure worker roles for the test controller and test agents
    - Automate deployment & configuration of the SQL database on the test controller worker role
    - Scale test agents on demand
    - Create a web performance test and a simple load test
    - Manage test controllers right from Visual Studio on the on-premise developer workstation
    - View the results of the load test
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    Scenario 2 – The Scaled-Out Test Rig and Sharing Data Using SQL Azure

    A scaled-out version of this implementation would involve running multiple test rigs in the cloud. In this scenario I will show you how to sync the load test databases from these distributed test rigs into one SQL Azure database using Azure Sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store.
    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure Windows Azure Sync
    - Test the SQL Azure load test result database created as a result of Windows Azure Sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    The Ingredients

    Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the below to try out the scenarios:
    1. Windows Azure subscription
    2. Windows Azure Storage – blob storage
    3. Windows Azure Compute – worker role
    4. SQL Azure database
    5. SQL Data Sync
    6. Windows Azure Connect – endpoints
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 Ultimate or Visual Studio 2010 Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough

    To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role setup starts; the easiest and most efficient way is to pre-upload the required software into Windows Azure blob storage. SQL Express, the test controller and the test agent all expose various installer switches which we can take advantage of, including the quiet-install switch. Once all three have been installed, the test controller needs to be registered with the test agents and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and the developer workstation on premise, we create a virtual network among the machines, enabling two-way communication. All of the above can be done programmatically; let's see step by step how...

    Scenario 1: Video Walkthrough – Leveraging Windows Azure for Performance Testing

    Scenario 2: Work in progress, watch this space for more...

    Solution

    If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS Preview project, which has a reusable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.

    Conclusion

    Other posts and resources available here. Possibilities.... Endless!

    Read the article

  • Defining jUnit Test cases Correctly

    - by Epitaph
    I am new to unit testing and therefore wanted to do a practical exercise to get familiar with the jUnit framework. I created a program that implements a String multiplier:

        public String multiply(String number1, String number2)

    In order to test the multiplier method, I created a test suite consisting of the following test cases (with all the needed integer parsing, etc.):

        @Test
        public class MultiplierTest {
            Multiplier multiplier = new Multiplier();

            // Test for 2 positive integers
            assertEquals("Result", 5, multiplier.multiply("5", "1"));

            // Test for 1 positive integer and 0
            assertEquals("Result", 0, multiplier.multiply("5", "0"));

            // Test for 1 positive and 1 negative integer
            assertEquals("Result", -1, multiplier.multiply("-1", "1"));

            // Test for 2 negative integers
            assertEquals("Result", 10, multiplier.multiply("-5", "-2"));

            // Test for 1 positive integer and 1 non number
            assertEquals("Result", , multiplier.multiply("x", "1"));

            // Test for 1 positive integer and 1 empty field
            assertEquals("Result", , multiplier.multiply("5", ""));

            // Test for 2 empty fields
            assertEquals("Result", , multiplier.multiply("", ""));
        }

    In a similar fashion, I can create test cases involving boundary cases (considering numbers are int values) or even imaginary values.
    1) But what should be the expected value for the last 3 test cases above? (a special number indicating error?)
    2) What additional test cases did I miss?
    3) Is the assertEquals() method enough for testing the multiplier method, or do I need other methods like assertTrue(), assertFalse(), assertSame(), etc.?
    4) Is this the RIGHT way to go about developing test cases? How am I "exactly" benefiting from this exercise?
    5) What should be the ideal way to test the multiplier method? I am pretty clueless here.
    If anyone can help answer these queries I'd greatly appreciate it. Thank you.
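    For comparison, here is one conventional JUnit 4 structure: each case as its own @Test method, with invalid input expressed as an expected exception rather than a sentinel value. This sketch assumes multiply throws NumberFormatException on non-numeric input, which is one design choice, not the only one:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class MultiplierTest {
            private final Multiplier multiplier = new Multiplier();

            @Test
            public void multipliesTwoPositiveIntegers() {
                // multiply returns a String per the signature above
                assertEquals("5", multiplier.multiply("5", "1"));
            }

            @Test
            public void multipliesTwoNegativeIntegers() {
                assertEquals("10", multiplier.multiply("-5", "-2"));
            }

            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericInput() {
                multiplier.multiply("x", "1");
            }
        }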

    Read the article

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn't because I didn't believe in the value of TDD; it was more a matter of not completely understanding how to incorporate "test first" into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren't exactly designed for testability, so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC, I quickly found myself fresh out of excuses, and it became instantly clear that it was time to get my head around red-green-refactor once and for all or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands-on learning, but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed.

    But then I got lucky – Jonathan McCracken contacted me and asked if I'd review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application and then dedicates the better part of his book to testing first – first by explaining TDD and then coding a full-featured Getting Organized application inspired by David Allen's popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken's style – it's fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD. The book continues with the test-first theme, but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken's use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and ASP.NET SQL Server Setup Wizard. McCracken's emphasis on real-world, pragmatic development was clearly demonstrated in every tool choice, straightforward code block and developer tip. Whether one is already familiar with the tools/tips or not, McCracken's thought process is easily understood and appreciated.

    The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight?  Well, no.  I don’t think any book can make that claim.  If that were possible, I think book list prices would skyrocket!  That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC.  Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC it’s just what you need to get started.  Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development?  Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach.  I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen.  If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

    Read the article

  • Test descriptions/names: say what the test is? Or what it means when it fails?

    - by xenoterracide
    The API docs for Test::More::ok give: ok($got eq $expected, $test_name); Right now, in one of my apps, I have $test_name describe what the test is testing. For example, in one of my tests I set it to 'filename exists'. What I realized after a recent bug report is that the only time I ever see this message is when the test is failing; and if the test is failing, that means the file doesn't exist. In your opinion, should these $test_names say what the test means if it is successful? What it means if it failed? Or something else? Please explain why.

    Read the article

  • test coverage reality

    - by iPhoneDeveloper
    I am NOT doing test-driven development; I write my test classes after the actual code is written. In my current project I have test coverage (line coverage) of 70% for 3,000 lines of Java code (using JUnit, Mockito and Sonar for testing). But I feel I am not actually covering and catching 70% of the problems that can occur. So my question is: in theory, is it possible to have 100% line coverage that is nevertheless meaningless because of low-quality test code? Is 40% coverage with well-written tests perhaps much better than a bad 100% coverage? Or can we always say that line coverage more or less gives the percentage of all covered issues?

    Read the article

  • Can you recommend a good test plan template?

    - by Ethel Evans
    Can you recommend a good test plan template for an agile testing team? I know there are templates for testing on the web and have already looked at some found by search engines, but I could really use something lightweight and something that has already been tried by skilled testers and is known to work well. Many templates I've seen give me the feeling that writing test documents is expected to be a third of the work that those testers are doing, but my team really prefers to use less documentation and more actual test writing. We use a wiki for documentation, so an approach that lends itself to living documents would be great. My hope is that using a more structured approach to test planning will increase the usefulness of my test plan while reducing the effort to create it by allowing me to think about the tests, and not the format and structure of the plan. My workplace does not have something already on hand, so whatever I start doing might be adopted by the company.

    Read the article

  • UI Testing Tool on Linux

    - by Hulk
    Looking for a tool for UI testing on Linux. The platform used for development is Django. The idea is that analysts will capture tests through some UI, and the tests can then be played back over and over again.

    Read the article

  • Step by Step screencasts to do Behavior Driven Development on WCF and UI using xUnit

    - by oazabir
    I am trying to encourage my team to get into Behavior Driven Development (BDD). So I made two quick video tutorials to show how BDD can be done, from the early requirement-collection stage to late integration tests. The first explains breaking user stories into behaviors, and then developers and test engineers taking the behavior specs and writing a WCF service and unit tests for it in parallel, then eventually integrating the WCF service and doing the integration tests. It introduces how mocking is done using the Moq library. Moreover, it shows how you can write a test once and run it as both a unit test and an integration test at the flip of a config setting. Watch the screencast here: Doing BDD with xUnit and Subspec on a WCF Service. Warning: you might hear some noise in the audio in places; something is wrong with the audio bit rate. I suggest you let the video download for a while and then play it. If you still get noise, go back a couple of seconds and then resume play; that eliminates the noise.

    The next video tutorial is about using BDD for automated UI tests. It shows how test engineers can take behaviors and write tests that exercise a prototype UI in isolation (just like a service contract) to ensure the prototype conforms to the expected behaviors, while developers write the real code and build the real product in parallel. When the real thing is done, the same tests can run against it and ensure the agreed behaviors are satisfied. I have used WatiN to automate the UI and test it for expected behaviors. Doing BDD with xUnit and WatiN on an ASP.NET webform. Hope you like it!

    Read the article

  • Mock the window.setTimeout in a Jasmine test to avoid waiting

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/21/mock-the-window.settimeout-in-a-jasmine-test-to-avoid-waiting.aspx

    Jasmine has a clock mocking feature, but I was unable to make it work in a function that I'm calling and want to test. The example only shows using the clock for a setTimeout in the spec tests, and I couldn't find a good example. Here is my current and slightly limited approach.

    If we have a method we want to test:

        var test = function(){
            var self = this;
            self.timeoutWasCalled = false;
            self.testWithTimeout = function(){
                window.setTimeout(function(){
                    self.timeoutWasCalled = true;
                }, 6000);
            };
        };

    Here's my testing code:

        var realWindowSetTimeout = window.setTimeout;

        describe('test a method that uses setTimeout', function(){
            var testObject;

            beforeEach(function () {
                // force setTimeout to be called right away, no matter what time they specify
                jasmine.getGlobal().setTimeout = function (funcToCall, millis) {
                    funcToCall();
                };
                testObject = new test();
            });

            afterEach(function() {
                jasmine.getGlobal().setTimeout = realWindowSetTimeout;
            });

            it('should call the method right away', function(){
                testObject.testWithTimeout();
                expect(testObject.timeoutWasCalled).toBeTruthy();
            });
        });

    I got a good pointer from Andreas in this StackOverflow question. This would also work for window.setInterval. Other possible approaches:
    - create a wrapper module of setTimeout and setInterval methods that can be mocked (a sketch follows below). This can be mocked with RequireJS or passed into the constructor.
    - pass the window.setTimeout function into the method (this could get messy)
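    A minimal sketch of the wrapper-module idea, assuming plain constructor injection (the names timers and Poller are illustrative, not from the post):

        // timers.js: a thin wrapper that production code depends on
        // instead of calling window.setTimeout directly
        var timers = {
            setTimeout: function (fn, ms) { return window.setTimeout(fn, ms); }
        };

        // production code takes the wrapper as a constructor argument
        var Poller = function (timerService) {
            var self = this;
            self.fired = false;
            self.start = function () {
                timerService.setTimeout(function () { self.fired = true; }, 6000);
            };
        };

        // in a spec, substitute a synchronous fake; no real waiting occurs
        describe('Poller', function () {
            it('fires via the injected timer', function () {
                var fakeTimers = { setTimeout: function (fn, ms) { fn(); } };
                var poller = new Poller(fakeTimers);
                poller.start();
                expect(poller.fired).toBeTruthy();
            });
        });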

    Read the article

  • Templated function with two type parameters fails compile when used with an error-checking macro

    - by SirPentor
    Because someone in our group hates exceptions (let's not discuss that here), we tend to use error-checking macros in our C++ projects. I have encountered an odd compilation failure when using a templated function with two type parameters. There are a few errors (below), but I think the root cause is a warning:

        warning C4002: too many actual parameters for macro 'BOOL_CHECK_BOOL_RETURN'

    Probably best explained in code:

        #include "stdafx.h"

        template<class A, class B>
        bool DoubleTemplated(B & value) { return true; }

        template<class A>
        bool SingleTemplated(A & value) { return true; }

        bool NotTemplated(bool & value) { return true; }

        #define BOOL_CHECK_BOOL_RETURN(expr) \
            do \
            { \
                bool __b = (expr); \
                if (!__b) \
                { \
                    return false; \
                } \
            } while (false)

        bool call()
        {
            bool thing = true;

            // BOOL_CHECK_BOOL_RETURN(DoubleTemplated<int, bool>(thing));
            // Above line doesn't compile.

            BOOL_CHECK_BOOL_RETURN((DoubleTemplated<int, bool>(thing)));
            // Above line compiles just fine.

            bool temp = DoubleTemplated<int, bool>(thing);
            // Above line compiles just fine.

            BOOL_CHECK_BOOL_RETURN(SingleTemplated<bool>(thing));
            BOOL_CHECK_BOOL_RETURN(NotTemplated(thing));
            return true;
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            call();
            return 0;
        }

    Here are the errors when the offending line is not commented out:

        1>------ Build started: Project: test, Configuration: Debug Win32 ------
        1>Compiling...
        1>test.cpp
        1>c:\junk\temp\test\test\test.cpp(38) : warning C4002: too many actual parameters for macro 'BOOL_CHECK_BOOL_RETURN'
        1>c:\junk\temp\test\test\test.cpp(38) : error C2143: syntax error : missing ',' before ')'
        1>c:\junk\temp\test\test\test.cpp(38) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(41) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(48) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(49) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(52) : error C2143: syntax error : missing ';' before '}'
        1>c:\junk\temp\test\test\test.cpp(54) : error C2065: 'argv' : undeclared identifier
        1>c:\junk\temp\test\test\test.cpp(54) : error C2059: syntax error : ']'
        1>c:\junk\temp\test\test\test.cpp(55) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(58) : error C2143: syntax error : missing ';' before '}'
        1>c:\junk\temp\test\test\test.cpp(60) : error C2143: syntax error : missing ';' before '}'
        1>c:\junk\temp\test\test\test.cpp(60) : fatal error C1004: unexpected end-of-file found
        1>Build log was saved at "file://c:\junk\temp\test\test\Debug\BuildLog.htm"
        1>test - 12 error(s), 1 warning(s)
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Any ideas? Thanks!

    Read the article

  • Welcome Relief

    - by michael.seback
    Government organizations are experiencing unprecedented demand for social services. The current economy continues to put immense stress on social service organizations. Increased need for food assistance, employment security, housing aid and other critical services is keeping agencies busier than ever. ... The Kansas Department of Labor (KDOL) uses Oracle's social services solution in its employment security program. KDOL has used Siebel Customer Relationship Management (CRM) for nearly a decade, and recently purchased Oracle Policy Automation to improve its services even further. KDOL implemented Siebel CRM in 2002, and has expanded its use of it over the years. The agency started with Siebel CRM in the call center and later moved it into case management. Siebel CRM has been a strong foundation for KDOL in the face of rising demand for unemployment benefits, numerous labor-related law changes, and an evolving IT environment. ... The result has been better service for constituents. "It's really enabled our staff to be more effective in serving clients," said Hubka. That's a trend the department plans to continue. "We're 100 percent down the path of Siebel, in terms of what we're doing in the future," Hubka added. "Their vision is very much in line with what we're planning on doing ourselves." ... Community Services is the leading agency responsible for the safety and well-being of children and young people within Australia's New South Wales (NSW) Government. Already a longtime Oracle Case Management user, Community Services recently implemented Oracle Policy Automation to ensure accurate, consistent decisions in the management of child safety. "Oracle Policy Automation has helped to provide a vehicle for the consistent application of the Government's 'Keep Them Safe' child protection action plan," said Kerry Holling, CIO for Community Services. "We believe this approach is a world-first in the structured decisionmaking space for child protection and we believe our department is setting an example that other child protection agencies will replicate." ... Read the full case study here.

    Read the article

  • HP Server Automation - agent misreporting hostname

    - by warren
    I've been using HP Server Automation for some time, but have noticed an interesting issue I'm hoping the SF community has seen and knows a workaround for. When the management agent on Solaris or RHEL (the only platforms I've noticed it on) reports the hostname of the managed server, it does not return the value of hostname; it returns the first alias for that entry in /etc/hosts. Any ideas on how to get around that, other than editing /etc/hosts so the alias is at the end of the line instead of the front?
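    For illustration, the ordering in question (addresses and names below are made up): if the agent picks up the first name after the address, moving the alias behind the canonical hostname changes what gets reported.

        # /etc/hosts ordering that misreports (alias listed first)
        10.0.0.15   webalias01   myhost01.example.com   myhost01

        # ordering workaround: canonical hostname first, alias at the end
        10.0.0.15   myhost01.example.com   myhost01   webalias01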

    Read the article
