Search Results

Search found 23949 results on 958 pages for 'test test'.


  • Programming test for ASP.NET C# developer job - Opinions please!

    - by Indy
    Hi all, we are hiring a .NET C# developer and I have put together a technical test for the candidates to complete. It lasts an hour and has two parts: some knowledge-based questions covering ASP.NET, C# and SQL, and a small practical exercise. I'd appreciate feedback on the test: is it sufficient to gauge a programmer's ability, and what would you change, if anything?

    Part One:

    1. What events are fired as part of the ASP.NET page lifecycle, and what interesting things can you do at each?
    2. How does ViewState work, and why is it either useful or bad?
    3. What is a common way to create web services in ASP.NET 2.0?
    4. What is the GAC?
    5. What is boxing?
    6. What is a delegate?
    7. The C# keyword 'int' maps to which .NET type?
    8. Explain the difference between a stored procedure and a trigger.
    9. What is an OUTER JOIN?
    10. What is @@IDENTITY?

    Part Two: You are provided with the Northwind database and the attached DB relationship diagram. Please create a page which gives users the following functionality (you don't need to be too concerned with the presentation detail of the page):

    1. Select a customer from a list and see all the orders placed by that customer.
    2. For the same customer, find all their orders which are Beverages and where the quantity is more than 5.

    I was careful about setting the right level of difficulty, as it is an hour-long test: I was able to complete the practical part in under 30 minutes, plus the written questions, using SqlDataSource and the query designer in Visual Studio. I am looking to see how candidates approach it logically and whether they use the tools available. Many thanks!


  • App Engine Hangout - chat with an App Engine Software Engineer in Test

    We'll be chatting with Robert Schuppenies, who is an App Engine Software Engineer in Test. He'll describe a bit about what he does, and talk about and demo some App Engine test frameworks, such as the testbed module (code.google.com and code.google.com). From: GoogleDevelopers.


  • Failed to spawn test

    - by Lost
    Running a simple test in Ubuntu 12.04:

        sudo lxc-execute -n test /bin/bash -l debug -o outout

    Got error message:

        lxc-execute: failed to spawn 'test'

    cat outout:

        lxc-execute 1347053658.113 DEBUG lxc_start - sigchild handler set
        lxc-execute 1347053658.113 INFO lxc_start - 'test' is initialized
        lxc-execute 1347053658.366 DEBUG lxc_start - Dropping cap_sys_boot and watching utmp
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/' (rootfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/sys' (sysfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/proc' (proc)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev' (devtmpfs)
        lxc-execute 1347053658.366 DEBUG lxc_cgroup - checking '/dev/pts' (devpts)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/' (ext3)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/fs/fuse/connections' (fusectl)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/debug' (debugfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/sys/kernel/security' (securityfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/lock' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/shm' (tmpfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/run/rpc_pipefs' (rpc_pipefs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/scratch/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/share' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/proj/WAMC-Simulation' (nfs)
        lxc-execute 1347053658.367 DEBUG lxc_cgroup - checking '/users/bhu' (nfs)
        lxc-execute 1347053658.367 ERROR lxc_start - failed to spawn 'test'

    Run command: sudo lxc-checkconfig

        Kernel config /proc/config.gz not found, looking in other places...
        Found kernel config file /boot/config-2.6.38.7-1.0emulab
        --- Namespaces ---
        Namespaces: enabled
        Utsname namespace: enabled
        Ipc namespace: enabled
        Pid namespace: enabled
        User namespace: enabled
        Network namespace: enabled
        Multiple /dev/pts instances: enabled
        --- Control groups ---
        Cgroup: enabled
        Cgroup namespace: enabled
        Cgroup device: enabled
        Cgroup sched: enabled
        Cgroup cpu account: enabled
        Cgroup memory controller: enabled
        Cgroup cpuset: enabled
        --- Misc ---
        Veth pair device: enabled
        Macvlan: enabled
        Vlan: enabled
        File capabilities: enabled
        Note : Before booting a new kernel, you can check its configuration
        usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

    What's the problem? Thanks a lot.


  • How to drastically improve code coverage?

    - by Peter Kofler
    I'm tasked with getting a legacy application under unit test. First, some background about the application: it's a 600k LOC Java RCP code base with these major problems:

    - massive code duplication
    - no encapsulation; most private data is accessible from outside, and some of the business data has been made into singletons, so it's not just changeable from outside but from everywhere
    - no business model; business data is stored in Object[] and double[][], so no OO

    There is a good regression test suite, and an efficient QA team is testing and finding bugs. I know the techniques for getting it under test from the classic books, e.g. Michael Feathers, but that's too slow. As there is a working regression test system, I'm not afraid to aggressively refactor the system to allow unit tests to be written. How should I start attacking the problem to get some coverage quickly, so that I'm able to show progress to management (and in fact start benefiting from a safety net of JUnit tests)? I do not want to employ tools that generate regression test suites, e.g. AgitarOne, because those tests do not check whether something is correct.
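
    A common first step in this situation is a characterization test: call the legacy code, record whatever it returns today, and assert exactly that, so aggressive refactoring is caught by JUnit rather than only by the QA team. A minimal JUnit 4 sketch of the idea (the PriceCalculator class and its raw-array API are hypothetical stand-ins, not from the post):

        import static org.junit.Assert.assertArrayEquals;
        import org.junit.Test;

        public class PriceCalculatorCharacterizationTest {

            // Characterization test: pins down whatever the legacy code does today,
            // so a refactoring that changes behaviour fails fast in JUnit.
            @Test
            public void totalsMatchCurrentBehaviour() {
                PriceCalculator calculator = new PriceCalculator();

                // Business data in this kind of legacy system lives in raw arrays.
                double[][] orders = { { 10.0, 2 }, { 5.5, 4 } };

                double[] totals = calculator.computeTotals(orders);

                // Expected values were obtained by running the current code once and
                // copying its output - they document behaviour, not correctness.
                assertArrayEquals(new double[] { 20.0, 22.0 }, totals, 0.0001);
            }
        }

    Tests like this say nothing about whether the behaviour is right, but they are quick to write in bulk and give measurable coverage plus a safety net for the refactoring that follows.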


  • Mock Objects for Testing - Test Automation Engineer Perspective

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing, or is dealing with mock objects purely a developer's job? I ask because I'm interested in QA as a career and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and at what point the QA engineer takes over testing for better test coverage. Thanks. Edit: Based on the answers below, I'm providing more detail about the kind of QA I was referring to. I'm interested in test automation rather than simple QA involving record-and-playback scripting. So are test automation engineers responsible for developing frameworks, or is there a team of developers dedicated to framework development? And yes, I was asking about the usage of mock objects for testing from a test automation engineer's perspective.
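
    For context on what writing a mock typically looks like, here is a minimal sketch using JUnit with the Mockito library (Mockito is just one common choice; the PaymentService and PaymentGateway types are hypothetical):

        import static org.junit.Assert.assertTrue;
        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.verify;
        import static org.mockito.Mockito.when;
        import org.junit.Test;

        public class PaymentServiceTest {

            @Test
            public void chargesTheGatewayOnce() {
                // The collaborator is mocked so the unit test never touches a real gateway.
                PaymentGateway gateway = mock(PaymentGateway.class);
                when(gateway.charge("42", 100)).thenReturn(true);

                PaymentService service = new PaymentService(gateway);
                boolean ok = service.pay("42", 100);

                assertTrue(ok);
                verify(gateway).charge("42", 100); // interaction-based assertion
            }
        }

    Because the mock is wired up inside the unit test itself, it is usually written by whoever writes that test, which is the heart of the question above.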


  • TDD: Write a separate test for object initialization, or rely on other tests exercising it?

    - by DXM
    This seems to be a common pattern that's emerging in some of the tests I've worked on lately. We have a class, and quite often this is legacy code whose design can't easily be altered, which has a bunch of member variables. There's some kind of "Initialize" or "Load" function which puts an object into a valid state. Only after it is initialized/loaded are the members in the proper state, so that other methods can be exercised. So when we start writing tests, the first test is "TestLoad", and all we put in there is an exercise of the initialization logic. Then we might add one (or a few) TestLoadFailureXXX tests, and those are definitely valuable. Then we start writing tests to verify other behaviours, but all of them require the object to be loaded, so they all start by running exactly the same code as "TestLoad". So my question: is TestLoad even necessary? Do you drop it and let the other tests simply exercise the loading, or leave it in so things are more explicit? I know that each unit test function should have no (or as little as possible) overlap with other test functions, but it seems that in the case of loading this is unavoidable. And whether we like it or not, if something in the loading code breaks, we will end up with a whole test suite of failures. Is there another approach that I might be missing here? Thank you for the responses. It definitely makes sense that you want to see "InitializationTest", and if that fails you know where to start looking. In case it matters, this question is mostly about C++ and we use the CppUnit framework. And now, thanks to sleske, I'll be constantly wishing that CppUnit supported test dependencies. Might have to hack something in one of these days :)
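
    The question is about C++ and CppUnit, but the shape of one common answer is easier to show with JUnit: keep a single explicit test for the load, and move the shared loading into the fixture so the behaviour tests don't repeat it. A sketch, assuming a hypothetical Document class:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;
        import org.junit.Before;
        import org.junit.Test;

        public class DocumentTest {

            private Document document;

            // Shared initialization: every test starts from a freshly loaded object.
            @Before
            public void setUp() {
                document = new Document();
                document.load("fixture-data.xml");
            }

            // One explicit test that the load itself works; if loading breaks,
            // this is the failure to read first.
            @Test
            public void loadPutsDocumentIntoValidState() {
                assertTrue(document.isLoaded());
            }

            // The behaviour tests simply assume the fixture did its job.
            @Test
            public void titleIsReadFromTheLoadedData() {
                assertEquals("Expected title", document.getTitle());
            }
        }

    If the loading code breaks, every test here still fails, but the explicit load test points at where to start looking, which matches the "InitializationTest" suggestion above.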


  • Test-Driven Development with plain C: manage multiple modules

    - by Angelo
    I am new to test-driven development, but I'm loving it. There is, however, one main problem that prevents me from using it effectively. I work on embedded medical applications, in plain C, with safety requirements. Suppose you have module A with a function A_function() that I want to test. This function calls a function B_function(), implemented in module B. I want to decouple the modules, so, as James Grenning teaches, I create a mock module B that implements a mock version of B_function(). However, the day comes when I have to implement module B with the real version of B_function(). Of course the two versions of B_function() cannot live in the same executable, so I don't know how to have a single "launcher" that tests both modules. James Grenning's way out is to replace, in module A, the call to B_function() with a function pointer that can point to either the mock or the real function, according to need. However, I work in a team, and I cannot justify a decision that would make no sense were it not for the tests, and no one has explicitly asked me to use a test-driven approach. Maybe the only way out is to generate a different test executable for each module. Any smarter solution? Thank you.


  • ORACLE UK TECHNOLOGY “TEST FEST”

    - by mseika
    ORACLE UK TECHNOLOGY "TEST FEST" - Join us at the UKOUG Conference at the ICC in Birmingham and take your OPN Implementation Specialist exam for free! 3-5 December 2012, ICC Birmingham (UK).

    Dear Oracle Partner, as a priority partner you are receiving advance notice of these exclusive "Technology Test Fest" free examination sessions. Please note that this communication will be sent out to the wider community one week from today, so register immediately to secure your place!

    We are delighted to offer you the exclusive opportunity to register for and attend the Oracle UK "Technology Test Fest", held as part of the UKOUG Conference in the Drawing Room at the Hyatt Regency hotel adjacent to the ICC venue, from 3rd to 5th December 2012. This is your opportunity to sit your chosen Oracle Technology Specialist Implementation Exam free of charge. Four sessions are being run (at 10.00am and 2.00pm), with just 15 places at each session - so register now to avoid disappointment! (Exams take about 1.5 hours to complete.)

    - REGISTER - 3 December, afternoon session - 2:00pm
    - REGISTER - 4 December, morning session - 10:00am
    - REGISTER - 4 December, afternoon session - 2:00pm
    - REGISTER - 5 December, morning session - 10:00am

    Price: FREE. Address: The Drawing Room, Hyatt Regency Hotel Birmingham, 2 Bridge Street, Birmingham BI 2JZ, 3-5 December 2012.

    Which Implementation Specialist exams are available to take? Click here to see the list of exams available for you to sit for free at the Oracle UKOUG "Technology Test Fest". The links also include the study guide for each exam. Please review the Specialization Guide as well.

    How do I register for the Oracle UK "Technology Test Fest"? Fill out the Pearson Vue profile HERE and complete it with your OPN Company ID (instructions on how to create/update the profile can be found HERE), then register for one of the four sessions using the registration links at the top of this page. You will need to bring your own laptop running Windows and a form of identification to be able to take any of the exams.

    Need help or advice? For more information about the tests and the Get Specialized programme, please contact [email protected]. For issues with your profile or any other OPN-related problems, please contact our Oracle Partner Business Centre: [email protected] or call 08705 194 194.

    We look forward to welcoming you to the Oracle UK "Technology Test Fest" on 3rd-5th December 2012! Book early to avoid disappointment.


  • Test Driven Development Code Order

    - by Bobby Kostadinov
    I am developing my first project using test-driven development. I am using Zend Framework and PHPUnit. Currently my project is at 100% code coverage, but I am not sure I understand in what order I am supposed to write my code. Am I supposed to write my test FIRST, describing what my objects are expected to do, or write my objects and then test them? I've been working by completing a controller/model and then writing a test for it, but I am not sure this is what TDD is about. Any advice? For example, I wrote my Auth plugin and my Auth controller and tested that they worked properly in my browser, and then I sat down to write the tests for them, which proved that there were some logical errors in code that did work in the browser.
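
    For reference, the cycle usually described as TDD is: write one small failing test, write just enough code to make it pass, then refactor and repeat. The question is about PHPUnit, but a minimal JUnit sketch of the first step (the Auth class here is hypothetical) looks like this:

        import static org.junit.Assert.assertFalse;
        import org.junit.Test;

        public class AuthTest {

            // Written before Auth exists: it does not even compile at first,
            // which is the "red" step of red-green-refactor.
            @Test
            public void rejectsAnEmptyPassword() {
                Auth auth = new Auth();
                assertFalse(auth.isValid("bob", ""));
            }
        }

    Writing the test after the browser has already shown the code "works", as described above, is usually called test-after; it still has value, but it gives up the design feedback that the failing-test-first step provides.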


  • If you should only have one assertion per test, how do you test multiple inputs?

    - by speg
    I'm trying to build up some test cases, and have read that you should try to limit the number of assertions per test case. So my question is: what is the best way to go about testing a function with multiple inputs? For example, I have a function that parses a string from the user and returns the number of minutes. The string can be in the form "5w6h2d1m", where w, h, d, m correspond to the number of weeks, hours, days, and minutes. If I wanted to follow the "one assertion per test" rule, I'd have to write multiple tests for each variation of input. That seems silly, so instead I just have something like:

        self.assertEqual(parse_date('5m'), 5)
        self.assertEqual(parse_date('5h'), 300)
        self.assertEqual(parse_date('5d'), 7200)
        self.assertEqual(parse_date('1d4h20m'), 1700)

    in the one test case. Is there a better way?
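
    One common middle ground is a parameterized test: each input/expected pair runs as its own test case with a single assertion, without a separate method per variation. The post is Python, but the same idea in JUnit 4 (with a hypothetical DateParser.parseToMinutes standing in for parse_date) looks roughly like this:

        import static org.junit.Assert.assertEquals;
        import java.util.Arrays;
        import java.util.Collection;
        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class DateParserTest {

            private final String input;
            private final int expectedMinutes;

            public DateParserTest(String input, int expectedMinutes) {
                this.input = input;
                this.expectedMinutes = expectedMinutes;
            }

            // Each row becomes a separate test case with a single assertion.
            @Parameters
            public static Collection<Object[]> cases() {
                return Arrays.asList(new Object[][] {
                    { "5m", 5 },
                    { "5h", 300 },
                    { "5d", 7200 },
                    { "1d4h20m", 1700 },
                });
            }

            @Test
            public void parsesToMinutes() {
                assertEquals(expectedMinutes, DateParser.parseToMinutes(input));
            }
        }

    A failing row is then reported individually instead of aborting the remaining assertions in one shared test method.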


  • Unittest test case only touches the file name

    - by Chen OT
    I was told that unit tests are fast, and that tests which touch the database, go across the network, or touch the file system are not unit tests. In one of my test cases, the input is the set of file names (roughly 300-400 of them) under a specific folder. Although this input comes from the file system, the execution time of the test is very fast. Should I move this test, which is fast but touches the file system, to a higher-level test suite?
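
    If the concern is the file-system dependency rather than speed, one option is to tag such tests so they can later be pulled out of the fast suite without moving them. The original question is about Python's unittest; the sketch below uses JUnit 4.8 categories purely as an illustration, and the class and marker names are made up:

        import static org.junit.Assert.assertTrue;
        import java.io.File;
        import org.junit.Test;
        import org.junit.experimental.categories.Category;

        public class FolderScannerTest {

            // Marker interface used only for grouping; the name is illustrative.
            public interface FileSystemTest {}

            // Tagged so a Categories-based suite can include or exclude it,
            // without arguing about whether it is still a "unit" test.
            @Category(FileSystemTest.class)
            @Test
            public void seesAtLeastOneFileInTheWorkingDirectory() {
                File[] entries = new File(".").listFiles();
                assertTrue(entries != null && entries.length > 0);
            }
        }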


  • Audio is working, but the speaker test doesn't work

    - by Pacquo
    I've installed Ubuntu 12.04 LTS from a minimal CD on a netbook (Asus 1001 PXD). I've installed the ubuntu-desktop package using the --no-install-recommends option. Everything works fine except the "sound test" for headphones or analog speakers: when I click the "test" buttons (front left and front right) I don't hear any sound. Despite this, audio in general works properly. I've checked the audio levels with alsamixer, and I have also checked that the test sounds actually exist in /usr/share/sounds/alsa. I tried an installation of Ubuntu 12.04 LTS made with the desktop CD, and in that case the speaker test works properly. I suppose, therefore, that the problem could be due to a missing package, but I have not identified which one.


  • Interpreting and using the Asterisk "timing test" command

    - by zigg
    Timing is very important for certain kinds of applications in Asterisk. If DAHDI is the timing source, the dahdi_test command can be used to check the timing provided by the DAHDI kernel module. If dahdi_test returns exclusively measurements above 99.975%, the DAHDI timing source is generally considered good. Since Asterisk 1.6, new timing sources have become available, such as pthread and timerfd. The accuracy of these timing sources seems to be measurable with the Asterisk CLI "timing test" command:

        localhost*CLI> timing test
        Attempting to test a timer with 50 ticks per second.
        Using the 'timerfd' timing module for this test.
        It has been 1000 milliseconds, and we got 50 timer ticks

    My concern is that timing 50 ticks seems to be a considerably less stressful test than dahdi_test's 8192 samples in 8000 ms, particularly since just about every system I've tried it on, virtual or otherwise, can handle it. I can ask "timing test" to ramp up to what I think are dahdi_test's standards:

        localhost*CLI> timing test 1024
        Attempting to test a timer with 1024 ticks per second.
        Using the 'timerfd' timing module for this test.
        It has been 1000 milliseconds, and we got 1024 timer ticks

    This will indeed break down a bit depending on the system I'm using, usually with a decrease in timer ticks. But I'm not sure whether it is useful to stress it to this level. Is there authoritative guidance on using and interpreting the "timing test" command to ensure that a given Asterisk system has a timing source that will work well?


  • How to test nginx proxy timeouts

    - by mkorszun
    Target: I would like to test all Nginx proxy timeout parameters in a very simple scenario. My first approach was to create a really simple HTTP server and put some delays in it:

    - between listen and accept, to test proxy_connect_timeout
    - between accept and read, to test proxy_send_timeout
    - between read and send, to test proxy_read_timeout

    Test:

    1) Server code (Python):

        import socket
        import os
        import time
        import threading

        def http_resp(conn):
            conn.send("HTTP/1.1 200 OK\r\n")
            conn.send("Content-Length: 0\r\n")
            conn.send("Content-Type: text/xml\r\n\r\n\r\n")

        def do(conn, addr):
            print 'Connected by', addr
            print 'Sleeping before reading data...'
            time.sleep(0)  # Set to test proxy_send_timeout
            data = conn.recv(1024)
            print 'Sleeping before sending data...'
            time.sleep(0)  # Set to test proxy_read_timeout
            http_resp(conn)
            print 'End of data stream, closing connection'
            conn.close()

        def main():
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.bind(('', int(os.environ['PORT'])))
            s.listen(1)
            print 'Sleeping before accept...'
            time.sleep(130)  # Set to test proxy_connect_timeout
            while 1:
                conn, addr = s.accept()
                t = threading.Thread(target=do, args=(conn, addr))
                t.start()

        if __name__ == "__main__":
            main()

    2) Nginx configuration: I have extended the default Nginx configuration by setting proxy_connect_timeout explicitly and adding a proxy_pass pointing to my local HTTP server:

        location / {
            proxy_pass http://localhost:8888;
            proxy_connect_timeout 200;
        }

    3) Observations:

    proxy_connect_timeout - even though I set it to 200s and sleep only 130s between listen and accept, Nginx returns 504 after ~60s, which might be because of the default proxy_read_timeout value. I do not understand how proxy_read_timeout could affect the connection at such an early stage (before accept); I would expect 200 here. Please explain!

    proxy_send_timeout - I am not sure my approach to testing proxy_send_timeout is correct; I think I still do not understand this parameter properly. After all, a delay between accept and read does not trigger proxy_send_timeout.

    proxy_read_timeout - this one seems to be pretty straightforward: setting a delay between read and write does the job.

    So I guess my assumptions are wrong and I probably do not understand the proxy_connect and proxy_send timeouts properly. Can someone explain them to me using the above test if possible (or by modifying it if required)?


  • How do I assert my exception message with the JUnit @Test annotation?

    - by Cshah
    I have written a few JUnit tests with the @Test annotation. If my test method throws a checked exception and I want to assert the message along with the exception, is there a way to do so with the JUnit @Test annotation? AFAIK, JUnit 4.7 doesn't provide this feature, but do any future versions provide it? I know that in .NET you can assert the message and the exception class. I'm looking for a similar feature in the Java world. This is what I want:

        @Test(expected = RuntimeException.class, message = "Employee ID is null")
        public void shouldThrowRuntimeExceptionWhenEmployeeIDisNull() {
        }
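
    The @Test annotation itself only checks the exception type, but JUnit 4.7 added the ExpectedException rule, which (as far as I recall) can also match the message; the plain try/fail/catch pattern works on any version. Both are sketched below, with a hypothetical EmployeeService standing in for the real code:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.fail;
        import org.junit.Rule;
        import org.junit.Test;
        import org.junit.rules.ExpectedException;

        public class EmployeeServiceTest {

            @Rule
            public ExpectedException thrown = ExpectedException.none();

            // JUnit 4.7+ rule: verifies the type and (a substring of) the message.
            @Test
            public void ruleChecksTypeAndMessage() {
                thrown.expect(RuntimeException.class);
                thrown.expectMessage("Employee ID is null");
                new EmployeeService().save(null);
            }

            // Version-independent alternative: catch the exception yourself.
            @Test
            public void tryCatchChecksTypeAndMessage() {
                try {
                    new EmployeeService().save(null);
                    fail("Expected a RuntimeException");
                } catch (RuntimeException e) {
                    assertEquals("Employee ID is null", e.getMessage());
                }
            }
        }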


  • JUnit run not picking up a file from src/test/resources that is required by a dependency jar

    - by saddy-dj
    Hi, I'm facing an issue where the file in test/resources is not picked up; instead, the jar's main/resources copy is picked up. The scenario is: MyProject's src/test/resources contains a config.xml which is needed by abc.jar, a dependency of MyProject. When running a test case for MyProject, it loads the config.xml from abc.jar instead of the one in MyProject's test/resources. I need to know the order in which Maven picks up resources - or is what I'm trying to do not possible? Thanks.
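
    One way to see what is actually happening is to ask the test's class loader for every config.xml it can see; for the standard class loaders the first result is normally the one getResource()/getResourceAsStream() will return, so the list shows whether src/test/resources or the copy inside abc.jar wins. A small diagnostic sketch (the class name is illustrative):

        import java.net.URL;
        import java.util.Enumeration;
        import org.junit.Test;

        public class ResourceOrderTest {

            // Prints every config.xml visible to the test class loader, in lookup order.
            @Test
            public void listAllConfigXmlCandidates() throws Exception {
                Enumeration<URL> candidates =
                        getClass().getClassLoader().getResources("config.xml");
                while (candidates.hasMoreElements()) {
                    System.out.println(candidates.nextElement());
                }
            }
        }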


  • Can't add client machine to windows server 2008 domain controller

    - by Patrick J Collins
    A bit of background before I dive into the gritty details: I have a single server running Windows 2003 Server where I host my ASP.net website and SQL Server + Reports. I've been creating ordinary windows user accounts to authenticate my users, and I enabled integrated windows authentication with impersonation. I've set up a bunch of user groups which correspond to certain roles (admin, power user, normal user, etc) and I test membership to enable or disable certain features. Overall, I'm pretty happy with the solution, it was quick to setup and I don't have to worry about messing around storing passwords and whatnot. Well, what I'm trying to do now is set up a new environment with 3 servers (Web, SQL, Reports) and I'd like these three servers to share common user accounts. I understand that I could add these three machines to a domain, which means installing Active Directory on one of the machines. I am barking up the wrong tree here? Would you suggest an alternative configuration? Assuming that I stick with AD, I have a couple of questions regarding DNS. To be honest, I'd rather not fiddle around with the DNS settings because my ISP already has their own DNS server which works just fine. It would appear however that DNS and AD are intertwined. Firstly, if I am to create a new domain in called mycompany.net, do I actually need to be the registered owner of that domain name and ensure the DNS entry points to the IP address of the machine hosting AD? Secondly, for the two other machines that I am trying to add to the domain, do I need to fiddle with their DNS settings? I've tried setting the preferred DNS Server IP address to that of my newly installed AD, but no luck. At this point, I can't add the two other machines to the domain. Here are some diagnostics that I have run based on a few suggestions I read on forums (sorry they're in French, although I could translate if needed). I ran nltest, which seems to indicate that the client can discover the domain controller. When I run dcdiag, the call to DsGetDcName fails with error 1722, not really sure what that means. Any suggestions? Thanks! C:\Users\Administrator>nltest /dsgetdc:mycompany.net Contrôleur de domaine : \\REPORTS.mycompany.net Adresse : \\111.111.111.111 GUID dom : 3333a4ec-ca56-4f02-bb9e-76c29c6c3832 Nom dom : mycompany.net Nom de la forêt : mycompany.net Nom de site du contrôleur de domaine : Default-First-Site-Name Nom de notre site : Default-First-Site-Name Indicateurs : PDC GC DS LDAP KDC TIMESERV WRITABLE DNS_DC DNS_DOMAIN DNS _FOREST CLOSE_SITE FULL_SECRET La commande a été correctement exécutée C:\Users\Administrator>dcdiag /s:mycompany.net /u: mycompany.net \pcollins /p:somepass Diagnostic du serveur d'annuaire Exécution de l'installation initiale : * Forêt AD identifiée. Collecte des informations initiales terminée. Exécution des tests initiaux nécessaires Test du serveur : Default-First-Site-Name\REPORTS Démarrage du test : Connectivity ......................... Le test Connectivity de REPORTS a réussi Exécution des tests principaux Test du serveur : Default-First-Site-Name\REPORTS Démarrage du test : Advertising Erreur irrécupérable : l'appel DsGetDcName (REPORTS) a échoué ; erreur 1722 Le localisateur n'a pas pu trouver le serveur. ......................... Le test Advertising de REPORTS a échoué Démarrage du test : FrsEvent Impossible d'interroger le journal des événements File Replication Service sur le serveur REPORTS.mycompany.net. Erreur 0x6ba « Le serveur RPC n'est pas disponible. 
» ......................... Le test FrsEvent de REPORTS a échoué Démarrage du test : DFSREvent Impossible d'interroger le journal des événements DFS Replication sur le serveur REPORTS.mycompany.net. Erreur 0x6ba « Le serveur RPC n'est pas disponible. » ......................... Le test DFSREvent de REPORTS a échoué Démarrage du test : SysVolCheck [REPORTS] Une opération net use ou LsaPolicy a échoué avec l'erreur 53, Le chemin réseau n'a pas été trouvé.. ......................... Le test SysVolCheck de REPORTS a échoué Démarrage du test : KccEvent Impossible d'interroger le journal des événements Directory Service sur le serveur REPORTS.mycompany.net. Erreur 0x6ba « Le serveur RPC n'est pas disponible. » ......................... Le test KccEvent de REPORTS a échoué Démarrage du test : KnowsOfRoleHolders ......................... Le test KnowsOfRoleHolders de REPORTS a réussi Démarrage du test : MachineAccount Impossible d'ouvrir le canal avec [REPORTS] : échec avec l'erreur 53 : Le chemin réseau n'a pas été trouvé. Impossible d'obtenir le nom de domaine NetBIOS Échec : impossible de tester le nom principal de service (SPN) HOST Échec : impossible de tester le nom principal de service (SPN) HOST ......................... Le test MachineAccount de REPORTS a réussi Démarrage du test : NCSecDesc ......................... Le test NCSecDesc de REPORTS a réussi Démarrage du test : NetLogons [REPORTS] Une opération net use ou LsaPolicy a échoué avec l'erreur 53, Le chemin réseau n'a pas été trouvé.. ......................... Le test NetLogons de REPORTS a échoué Démarrage du test : ObjectsReplicated ......................... Le test ObjectsReplicated de REPORTS a réussi Démarrage du test : Replications ......................... Le test Replications de REPORTS a réussi Démarrage du test : RidManager ......................... Le test RidManager de REPORTS a réussi Démarrage du test : Services Impossible d'ouvrir IPC distant à [REPORTS.mycompany.net] : erreur 0x35 « Le chemin réseau n'a pas été trouvé. » ......................... Le test Services de REPORTS a échoué Démarrage du test : SystemLog Impossible d'interroger le journal des événements System sur le serveur REPORTS.mycompany.net. Erreur 0x6ba « Le serveur RPC n'est pas disponible. » ......................... Le test SystemLog de REPORTS a échoué Démarrage du test : VerifyReferences ......................... Le test VerifyReferences de REPORTS a réussi Exécution de tests de partitions sur ForestDnsZones Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de ForestDnsZones a réussi Démarrage du test : CrossRefValidation ......................... Le test CrossRefValidation de ForestDnsZones a réussi Exécution de tests de partitions sur DomainDnsZones Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de DomainDnsZones a réussi Démarrage du test : CrossRefValidation ......................... Le test CrossRefValidation de DomainDnsZones a réussi Exécution de tests de partitions sur Schema Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de Schema a réussi Démarrage du test : CrossRefValidation ......................... Le test CrossRefValidation de Schema a réussi Exécution de tests de partitions sur Configuration Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de Configuration a réussi Démarrage du test : CrossRefValidation ......................... 
Le test CrossRefValidation de Configuration a réussi Exécution de tests de partitions sur mycompany Démarrage du test : CheckSDRefDom ......................... Le test CheckSDRefDom de mycompany a réussi Démarrage du test : CrossRefValidation ......................... Le test CrossRefValidation de mycompany a réussi Exécution de tests d'entreprise sur mycompany.net Démarrage du test : LocatorCheck Avertissement : l'appel DcGetDcName(GC_SERVER_REQUIRED) a échoué ; erreur 1722 Serveur de catalogue global introuvable - Les catalogues globaux ne fonctionnent pas. Avertissement : l'appel DcGetDcName(PDC_REQUIRED) a échoué ; erreur 1722 Contrôleur principal de domaine introuvable. Le serveur contenant le rôle PDC ne fonctionne pas. Avertissement : l'appel DcGetDcName(TIME_SERVER) a échoué ; erreur 1722 Serveur de temps introuvable. Le serveur contenant le rôle PDC ne fonctionne pas. Avertissement : l'appel DcGetDcName(GOOD_TIME_SERVER_PREFERRED) a échoué ; erreur 1722 Serveur de temps introuvable. Avertissement : l'appel DcGetDcName(KDC_REQUIRED) a échoué ; erreur 1722 Centre de distribution de clés introuvable : les centres de distribution de clés ne fonctionnent pas. ......................... Le test LocatorCheck de mycompany.net a échoué Démarrage du test : Intersite ......................... Le test Intersite de mycompany.net a réussi Update 1 : I am under the distinct impression that the problem is caused by some security settings. I have read elsewhere that the client needs to be able to access the fileshare sysvol. I had to enable Client for Microsoft Windows and File and Printer Sharing which were previously disabled. When I now run dcdiag the Advertising test works, which I suppose is forward progress. It currently chokes on the Services step (unable to open remote IPC). Démarrage du test : Services Impossible d'ouvrir IPC distant à [REPORTS.locbus.net] : erreur 0x35 « Le chemin réseau n'a pas été trouvé. » ......................... 
Le test Services de REPORTS a échoué The original English version of that error message : Could not open Remote ipc to [server] Update 2 : I attach some more diagnostics : Netsetup.log (client): 09/24/2009 13:27:09:773 ----------------------------------------------------------------- 09/24/2009 13:27:09:773 NetpValidateName: checking to see if 'WEB' is valid as type 1 name 09/24/2009 13:27:12:773 NetpCheckNetBiosNameNotInUse for 'WEB' [MACHINE] returned 0x0 09/24/2009 13:27:12:773 NetpValidateName: name 'WEB' is valid for type 1 09/24/2009 13:27:12:805 ----------------------------------------------------------------- 09/24/2009 13:27:12:805 NetpValidateName: checking to see if 'WEB' is valid as type 5 name 09/24/2009 13:27:12:805 NetpValidateName: name 'WEB' is valid for type 5 09/24/2009 13:27:12:852 ----------------------------------------------------------------- 09/24/2009 13:27:12:852 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name 09/24/2009 13:27:12:992 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0 09/24/2009 13:27:12:992 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3 09/24/2009 13:27:21:320 ----------------------------------------------------------------- 09/24/2009 13:27:21:320 NetpDoDomainJoin 09/24/2009 13:27:21:320 NetpMachineValidToJoin: 'WEB' 09/24/2009 13:27:21:320 OS Version: 6.0 09/24/2009 13:27:21:320 Build number: 6002 09/24/2009 13:27:21:320 ServicePack: Service Pack 2 09/24/2009 13:27:21:414 SKU: Windows Server® 2008 Standard 09/24/2009 13:27:21:414 NetpDomainJoinLicensingCheck: ulLicenseValue=1, Status: 0x0 09/24/2009 13:27:21:414 NetpGetLsaPrimaryDomain: status: 0x0 09/24/2009 13:27:21:414 NetpMachineValidToJoin: status: 0x0 09/24/2009 13:27:21:414 NetpJoinDomain 09/24/2009 13:27:21:414 Machine: WEB 09/24/2009 13:27:21:414 Domain: MYCOMPANY.NET 09/24/2009 13:27:21:414 MachineAccountOU: (NULL) 09/24/2009 13:27:21:414 Account: MYCOMPANY.NET\pcollins 09/24/2009 13:27:21:414 Options: 0x25 09/24/2009 13:27:21:414 NetpLoadParameters: loading registry parameters... 09/24/2009 13:27:21:414 NetpLoadParameters: DNSNameResolutionRequired not found, defaulting to '1' 0x2 09/24/2009 13:27:21:414 NetpLoadParameters: status: 0x2 09/24/2009 13:27:21:414 NetpValidateName: checking to see if 'MYCOMPANY.NET' is valid as type 3 name 09/24/2009 13:27:21:523 NetpCheckDomainNameIsValid [ Exists ] for 'MYCOMPANY.NET' returned 0x0 09/24/2009 13:27:21:523 NetpValidateName: name 'MYCOMPANY.NET' is valid for type 3 09/24/2009 13:27:21:523 NetpDsGetDcName: trying to find DC in domain 'MYCOMPANY.NET', flags: 0x40001010 09/24/2009 13:27:22:039 NetpDsGetDcName: failed to find a DC having account 'WEB$': 0x525, last error is 0x79 09/24/2009 13:27:22:039 NetpDsGetDcName: status of verifying DNS A record name resolution for 'KING.MYCOMPANY.NET': 0x0 09/24/2009 13:27:22:039 NetpDsGetDcName: found DC '\\KING.MYCOMPANY.NET' in the specified domain 09/24/2009 13:27:30:039 NetUseAdd to \\KING.MYCOMPANY.NET\IPC$ returned 53 09/24/2009 13:27:30:039 NetpJoinDomain: status of connecting to dc '\\KING.MYCOMPANY.NET': 0x35 09/24/2009 13:27:30:039 NetpDoDomainJoin: status: 0x35 09/24/2009 13:27:30:148 ----------------------------------------------------------------- ipconfig /all (on client): Configuration IP de Windows Nom de l'hôte . . . . . . . . . . : WEB Suffixe DNS principal . . . . . . : Type de noeud. . . . . . . . . . : Hybride Routage IP activé . . . . . . . . : Non Proxy WINS activé . . . . . . . . 
: Non Carte Ethernet Connexion au réseau local : Suffixe DNS propre à la connexion. . . : Description. . . . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated) Adresse physique . . . . . . . . . . . : **-15-5D-A1-17-** DHCP activé. . . . . . . . . . . . . . : Non Configuration automatique activée. . . : Oui Adresse IPv4. . . . . . . . . . . : **.***.163.122(préféré) Masque de sous-réseau. . . . . . . . . : 255.255.255.0 Passerelle par défaut. . . . . . . . . : **.***.163.2 Serveurs DNS. . . . . . . . . . . . . : **.***.163.123 NetBIOS sur Tcpip. . . . . . . . . . . : Activé ipconfig /all (on server): Configuration IP de Windows Nom de l'hôte . . . . . . . . . . : KING Suffixe DNS principal . . . . . . : mycompany.net Type de noeud. . . . . . . . . . : Hybride Routage IP activé . . . . . . . . : Non Proxy WINS activé . . . . . . . . : Non Liste de recherche du suffixe DNS.: locbus.net Carte Ethernet Connexion au réseau local : Suffixe DNS propre à la connexion. . . : Description. . . . . . . . . . . . . . : Intel 21140-Based PCI Fast Ethernet Adapter (Emulated) Adresse physique . . . . . . . . . . . : **-15-5D-A1-1E-** DHCP activé. . . . . . . . . . . . . . : Non Configuration automatique activée. . . : Oui Adresse IPv4. . . . . . . . . . . : **.***.163.123(préféré) Masque de sous-réseau. . . . . . . . . : 255.255.255.0 Passerelle par défaut. . . . . . . . . : **.***.163.2 Serveurs DNS. . . . . . . . . . . . . : 127.0.0.1 NetBIOS sur Tcpip. . . . . . . . . . . : Activé nslookup (on client): Serveur : *******.***.com Address: **.***.163.123 Nom : mycompany.net Addresses: ****:****:a37b::****:a37b **.****.163.123


  • How has test first development changed the way you write software?

    - by Toran Billups
    I've started to find that I can't write software without writing a test first. I ask this subjective question because I want to hear what others in the community think about the reasons I can't go back to writing production code without a test first:

    - If you can't write a test for something, you don't understand it
    - Without a regression test you can't clean up the code
    - You are going to test it anyway, so spend the time to do it right
    - Evolutionary design is possible without fear
    - You actually write less code yourself
    - Fast feedback cycles save time and money
    - Job security (fewer bugs make your boss happy)
    - It actually makes my work more enjoyable


  • In TDD, should tests be written by the person who implemented the feature under test?

    - by martin
    We are running a project that we want to build using test-driven development, and I thought about some questions that came up when initiating it. One question was: who should write the unit test for a feature? Should the unit test be written by the programmer implementing the feature? Or should it be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass? If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure of mini-iterations, so it would be too complex to have the tests written by another programmer. What would you say? Should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method can do?


  • JUnit confusion: use 'extends TestCase' or '@Test'?

    - by Rabarberski
    I've found the proper use (or at least the documentation) of JUnit very confusing. This question serves both as a future reference and as a real question. If I've understood correctly, there are two main approaches to creating and running a JUnit test.

    Approach A: create a class that extends TestCase, and start test methods with the word "test". When running the class as a JUnit Test (in Eclipse), all methods starting with the word "test" are automatically run.

        import junit.framework.TestCase;

        public class DummyTestA extends TestCase {
            public void testSum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Approach B: create a 'normal' class and add a @Test annotation to the method. Note that you do NOT have to start the method name with the word "test".

        import org.junit.*;
        import static org.junit.Assert.*;

        public class DummyTestB {
            @Test
            public void Sum() {
                int a = 5;
                int b = 10;
                int result = a + b;
                assertEquals(15, result);
            }
        }

    Mixing the two seems not to be a good idea; see e.g. this Stack Overflow question. Now, my question(s):

    1. What is the preferred approach, or when would you use one instead of the other?
    2. Approach B allows testing for exceptions by extending the @Test annotation, as in @Test(expected = ArithmeticException.class). But how do you test for exceptions when using approach A?
    3. When using approach A, you can group a number of test classes in a test suite:

        TestSuite suite = new TestSuite("All tests");
        suite.addTestSuite(DummyTestA.class);
        suite.addTestSuite(DummyTestAbis.class);

    But this can't be used with approach B (since each test class would have to subclass TestCase). What is the proper way to group tests for approach B?
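
    On question 3: JUnit 4 has its own suite mechanism for approach B, based on annotations instead of TestSuite. A minimal sketch (DummyTestBbis is a made-up second test class, mirroring DummyTestAbis above):

        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;
        import org.junit.runners.Suite.SuiteClasses;

        // Groups annotation-style test classes the same way TestSuite groups
        // TestCase subclasses in approach A.
        @RunWith(Suite.class)
        @SuiteClasses({ DummyTestB.class, DummyTestBbis.class })
        public class AllTests {
        }

    For question 2, the usual answer in approach A is the try/fail/catch pattern: call the code, fail() if no exception is thrown, and assert on the caught exception in the catch block.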


  • BizTalk 2009 - BizTalk Benchmark Wizard: Running a Test

    - by StuartBrierley
    The BizTalk Benchmark Wizard is a utility that can be used to gain some validation of a BizTalk installation, giving a level of guidance on whether it is performing as might be expected. It should be used after BizTalk Server has been installed and before any solutions are deployed to the environment; this will ensure that you are getting consistent and clean results from the BizTalk Benchmark Wizard. The BizTalk Benchmark Wizard applies load to the BizTalk Server environment under a choice of specific scenarios. During these scenarios, performance counter information is collected and assessed against statistics that are appropriate to the BizTalk Server environment. For details on installing the Benchmark Wizard, see my previous post. The BizTalk Benchmark Wizard provides two simple test scenarios, one for messaging and one for orchestrations, which can be used to test your BizTalk implementation.

    Messaging:

    1. Loadgen generates a new XML message and sends it over NetTCP
    2. A WCF-NetTCP receive location receives the XML document from Loadgen
    3. The PassThruReceive pipeline performs no processing and the message is published by the EPM to the MessageBox
    4. The WCF one-way send port, which is the only subscriber to the message, retrieves the message from the MessageBox
    5. The PassThruTransmit pipeline provides no additional processing
    6. The message is delivered to the back-end WCF service by the WCF NetTCP adapter

    Orchestrations:

    1. Loadgen generates a new XML message and sends it over NetTCP
    2. A WCF-NetTCP receive location receives the XML document from Loadgen
    3. The XMLReceive pipeline performs no processing and the message is published by the EPM to the MessageBox
    4. The message is delivered to a simple orchestration which consists of a receive location and a send port
    5. The WCF one-way send port, which is the only subscriber to the orchestration message, retrieves the message from the MessageBox
    6. The PassThruTransmit pipeline provides no additional processing
    7. The message is delivered to the back-end WCF service by the WCF NetTCP adapter

    Below is a quick outline of how to run the BizTalk Benchmark Wizard on a single server, although it should be noted that this is not ideal, as that server is then both generating and processing the load. In order to separate this load out you should run the "Indigo" service on a separate server. To start the BizTalk Benchmark Wizard, click Start > All Programs > BizTalk Benchmark Wizard > BizTalk Benchmark Wizard. On this screen click Next; you will then get a pop-up window. Check the server and database names and tick the "check prerequisites" check-box before pressing OK. The wizard will then check that the appropriate test scenarios are installed. You should then choose the test scenario that you wish to run (messaging or orchestration) and the architecture that most closely matches your environment. You will then be asked to confirm the host server for each of the host instances. Next you will be presented with the prepare screen. You will need to start the Indigo service before pressing the Test Indigo Service button. If you are running the Indigo service on a separate server you can enter the server name here. To start the Indigo service, click Start > All Programs > BizTalk Benchmark Wizard > Start Indigo Service. While the test is running you will be presented with two speed-dial type displays - one for received messages per second and one for processed messages per second. The green dial shows the current rate and the red dial shows the overall average rate. Optionally you can view the CPU usage of the various servers involved in processing the tests. For my development environment I expected low results, and this is what I got - although looking at the online high-scores table and comparing to the quad-core system listed, the results are perhaps not really that bad. At some point I may look at what improvements I can make to this score, but if you are interested in that now, take a look at Benchmark your BizTalk Server (Part 3).


  • Optional parens in Ruby for method with uppercase start letter?

    - by RasmusKL
    I just started out using IronRuby (but the behaviour seems consistent when I tested it in plain Ruby) for a DSL in my .NET application, and as part of this I'm defining methods to be called from the DSL via define_method. However, I've run into an issue regarding optional parens when calling methods starting with an uppercase letter. Given the following program:

        class DemoClass
          define_method :test do
            puts "output from test"
          end

          define_method :Test do
            puts "output from Test"
          end

          def run
            puts "Calling 'test'"
            test()
            puts "Calling 'test'"
            test
            puts "Calling 'Test()'"
            Test()
            puts "Calling 'Test'"
            Test
          end
        end

        demo = DemoClass.new
        demo.run

    Running this code in a console (using plain Ruby) yields the following output:

        ruby .\test.rb
        Calling 'test'
        output from test
        Calling 'test'
        output from test
        Calling 'Test()'
        output from Test
        Calling 'Test'
        ./test.rb:13:in `run': uninitialized constant DemoClass::Test (NameError)
            from ./test.rb:19:in `<main>'

    I realize that the Ruby convention is that constants start with an uppercase letter and that the general naming convention for methods in Ruby is lowercase. But the parens are really killing my DSL syntax at the moment. Is there any way around this issue?


  • IXRepository and test problems

    - by Ridermansb
    I recently had a doubt about how and where to test repository methods. Consider the following situation: I have an interface IRepository like this:

        public interface IRepository<T> where T : class, IEntity
        {
            IQueryable<T> Query(Expression<Func<T, bool>> expression);
            // ... Omitted
        }

    And a generic implementation of IRepository:

        public class Repository<T> : IRepository<T> where T : class, IEntity
        {
            public IQueryable<T> Query(Expression<Func<T, bool>> expression)
            {
                return All().Where(expression).AsQueryable();
            }
        }

    This is a base implementation that can be used by any repository; it contains the basic implementation against my ORM. Some repositories have specific filters, in which case we have IEmployeeRepository with a specific filter:

        public interface IEmployeeRepository : IRepository<Employee>
        {
            IQueryable<Employee> GetInactiveEmployees();
        }

    And the implementation of IEmployeeRepository:

        public class EmployeeRepository : Repository<Employee>, IEmployeeRepository
        // TODO: I have a dependency on the ORM at this point, via Repository<Employee>. How to solve it? How to test the GetInactiveEmployees method?
        {
            public IQueryable<Employee> GetInactiveEmployees()
            {
                return Query(p => p.Status != StatusEmployeeEnum.Active || p.StartDate < DateTime.Now);
            }
        }

    Questions:

    1. Is it right to inherit from Repository<Employee>? The goal is to reuse the code that already implements IRepository. If EmployeeRepository implemented only IEmployeeRepository, I would have to literally copy and paste the code of Repository<T>.
    2. In our example, with EmployeeRepository : Repository<Employee>, our Repository lies in our ORM layer, so we have a dependency on the ORM that makes some unit tests impossible. How do I create a unit test to ensure that the GetInactiveEmployees filter returns all Employees in which Status != Active and StartDate < DateTime.Now? I cannot just create a fake/mock of IEmployeeRepository, because then what would I be testing? I need to test the actual implementation of GetInactiveEmployees.

    The complete code can be found on GitHub.


  • //TODO: Test this thoroughly!!!!!!

    - by Edward Boyle
    I just ran into an ugly sight in my code:

        //TODO: Test this thoroughly!!!!!!
        private void ...

    I would very much like to go back in time and ask the past me what I meant - why did I add that TODO? ...And then smack the s%#t out of him. No matter how much testing I do of this code, I will always wonder whether the past me found something. Was it actually that code, or was it a calling method that may bring unwanted results? The fact that I find absolutely nothing wrong with the code makes it that much more haunting. The moral of the story: when you find something wrong and need to test it thoroughly, stay up another hour and test it. The clarity in your head on that issue, at that specific moment in time, would take hours' worth of commenting to make up for not finishing it then. Maybe what I meant was:

        // TODO: Test this thoroughly!!!!!!
        // All seems fine but test it just in case, not to worry.
        private void ...

    Doubt it. -I'm screwed.


  • Moving from Test Automation to Development

    - by avgvstvs
    I'm in an interesting quandary. I've been doing test automation using QTP for about 1.5 years, and am in the slow process of switching to a developer role at my current company. I also begin my Master's in CS this fall. An old friend is trying to recruit me for a Sr. Test Automation position that could potentially pay me $23k more for exactly what I do now, but it would obviously defer my move into development. The new company is much more technical overall (I would be moving from financial services to industrial automation), and they have MANY more software dev roles available. I know QA-type jobs traditionally carry an odd "danger" tag, but test automation is really a different beast. Does anyone have any experience moving from test automation to development? Does the QA stigma exist? The extra $$ would be nice, but not at the expense of my career. I should note that my Master's will focus on systems/parallel programming, so one thought is that I'll get automatic consideration for development upon completing it. I also work 6 hrs/wk doing game development with a friend.

