Search Results

Search found 25149 results on 1006 pages for 'test automation'.


  • Nginx conditional not evaluating correctly

    - by cjc
    I'm running into a weird problem with nginx and how it evaluates conditionals. Here's the relevant configuration:

        set $cors FALSE;

        if ($http_origin ~* (http://example.com|http://dev.example.com:8000|http://dev2.example.com)) {
            set $cors TRUE;
        }

        if ($request_method = 'OPTIONS') {
            set $cors $cors$request_method;
        }

        if ($cors = 'TRUE') {
            add_header 'Access-Test' "$cors";
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
            add_header 'Access-Control-Max-Age' '1728000';
        }

        if ($cors = 'TRUEOPTIONS') {
            add_header 'Access-Test' "$cors";
            add_header 'Access-Control-Allow-Origin' "$http_origin";
            add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
            add_header 'Access-Control-Allow-Headers' 'X-Requested-With, X-Prototype-Version';
            add_header 'Access-Control-Max-Age' '1728000';
            add_header 'Content-Type' 'text/plain';
        }

    The conditional blocks never trigger. When I remove the conditions, I see that the "Access-Test" and "Access-Control-Allow-Origin" headers are set correctly, but, as noted, enabling the conditionals causes the headers not to be sent. I'm testing by running:

        curl -Iv -i --request "OPTIONS" -H "Origin: http://example.com" http://staging.example.com/

    Am I missing something obvious? I've tried the "if" with and without quotes, etc. This is nginx 1.2.9.
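
    One commonly suggested alternative, sketched here as a rough, unverified example rather than a confirmed fix for this setup: nginx's if directive comes from the rewrite module and combines poorly with add_header, so CORS logic like this is often expressed with a map block (which must sit at the http level) plus unconditional add_header lines, relying on add_header skipping a header whose value is empty. The origins are copied from the question; everything else is illustrative.

        # http-level context
        map $http_origin $cors_origin {
            default "";
            "~^http://(example\.com|dev\.example\.com:8000|dev2\.example\.com)$" $http_origin;
        }

        # inside the relevant server/location block; the header is omitted when $cors_origin is empty
        add_header 'Access-Control-Allow-Origin' "$cors_origin";
        add_header 'Access-Control-Allow-Methods' 'POST, OPTIONS';
        add_header 'Access-Control-Max-Age' '1728000';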

    Read the article

  • Java constructor using generic types

    - by user37903
    I'm having a hard time wrapping my head around Java generic types. Here's a simple piece of code that in my mind should work, but I'm obviously doing something wrong. Eclipse reports this error in BreweryList.java:

        The method initBreweryFromObject() is undefined for the type <T>

    The idea is to fill a Vector with instances of objects that are a subclass of the Brewery class, so the invocation would be something like:

        BreweryList breweryList = new BreweryList(BrewerySubClass.class, list);

    BreweryList.java:

        package com.beerme.test;

        import java.util.Vector;

        public class BreweryList<T extends Brewery> extends Vector<T> {
            public BreweryList(Class<T> c, Object[] j) {
                super();
                for (int i = 0; i < j.length; i++) {
                    T item = c.newInstance();
                    // initBreweryFromObject() is an instance method
                    // of Brewery, of which <T> is a subclass (right?)
                    c.initBreweryFromObject();
                    // "The method initBreweryFromObject() is undefined
                    // for the type <T>"
                }
            }
        }

    Brewery.java:

        package com.beerme.test;

        public class Brewery {
            public Brewery() {
                super();
            }

            protected void breweryMethod() {
            }
        }

    BrewerySubClass.java:

        package com.beerme.test;

        public class BrewerySubClass extends Brewery {
            public BrewerySubClass() {
                super();
            }

            public void androidMethod() {
            }
        }

    I'm sure this is a complete-generics-noob question, but I'm stuck. Thanks for any tips!
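
    A minimal sketch of the shape this constructor usually ends up with, assuming initBreweryFromObject() really is a visible instance method on Brewery (it does not appear in the Brewery listing above): the reflective call creates the instance from the Class object, but the instance method then has to be invoked on that instance rather than on c, and newInstance()'s checked exceptions need handling. The add(item) call is an assumption about what the Vector subclass is meant to do.

        public BreweryList(Class<T> c, Object[] j)
                throws InstantiationException, IllegalAccessException {
            super();
            for (int i = 0; i < j.length; i++) {
                T item = c.newInstance();      // reflectively create a T (some Brewery subclass)
                item.initBreweryFromObject();  // call the instance method on item, not on the Class object c
                add(item);                     // assumption: the list should hold the new instances
            }
        }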

    Read the article

  • Puppet: is it ok to "force" certname when you expect to shuffle nodes around?

    - by Luke404
    We all know (good example on SF) that Puppet hostname detection can be... fun. At our company (and I guess we're not alone in this) we usually pre-configure servers at our offices and test them before bringing the gear to a remote datacenter and racking it. Of course the reverse DNS will change when doing that, even if we don't change the actual hostname of the system.

    We're slowly drafting our Puppet setup and I'd like to be sure those moves won't create problems. My idea is to explicitly configure the desired full FQDN of the system as certname in puppet.conf at server provision time (before the very first puppet run). My process would look something like this:

      1. basic OS installation
      2. basic network configuration, enough to reach the internet and resolve DNS
      3. install Puppet and set up certname
      4. start Puppet and let it manage the whole configuration
      5. test, fix problems in config (via Puppet), re-test, and so on...
      6. manually stop Puppet
      7. set up the new network configuration for the datacenter network
      8. move the machine to the DC
      9. turn it on
      10. Puppet should automatically start and keep on doing its job

    The process is supported by detecting the environment in Puppet's manifests (e.g. based on subnet, like they do at Wikimedia) and modifying configuration as needed (e.g. resolv.conf contents appropriate for each network). Each node's certname will never change for the whole system life cycle.

    Is there any problem with this approach? Could it be improved?
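
    For reference, a minimal sketch of what step 3 might write out before the first agent run (the FQDN is an illustrative placeholder):

        # /etc/puppet/puppet.conf on the node, written at provision time
        [main]
        certname = web01.dc1.example.com

    Since the agent's certificate is generated against that name on the first run, the identity the puppet master signed keeps matching the node after the move, regardless of what reverse DNS says at the datacenter.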

    Read the article

  • Network card shuts down when stressed

    - by user142485
    I have a network card that functions fine with light use but quits functioning after heavy use. I replaced it with a brand new one and still have the same issue, and I also updated the drivers. It is a wired D-Link card.

    The Internet seems fine for a small amount of web browsing, but when I run a bandwidth test it starts out fast and slows quickly until the card completely quits. I keep a constant ping of the gateway going while I run this, and it starts timing out a couple of seconds into the speed test. The card stays on and its data light still flashes some, but I cannot ping the gateway or anything else until the computer is rebooted. When I boot into safe mode I can browse and run the speed test fine with no problems.

    I am guessing that some program that loads in regular mode but not in safe mode is causing the issue? I have very limited software on the computer (AV and firewall turned off), but I am thinking I'll just have to start eliminating start-up programs and see if that helps. This is on Windows 7, if that makes any difference. Has anyone had a similar issue, or any other suggestions/ideas for narrowing down this problem?

    Read the article

  • IIS 6.0 https not working "connection was reset"

    - by cad
    Application server: Windows Server 2003 SP2 with IIS 6.0.

    IIS has a "Default Web Site" (port 18000, SSL 443, ID=1) with a certificate created by me. I have a specific site called "scj.galaxy.Weekly" (port 80, SSL 443, ID=1272369728) that is working fine. I have an entry in windows/system32/drivers/etc/hosts that links galaxy.Weekly.scjdev.ds to the server IP, on both my local machine and the application server.

    These sites work:

        http://scj.galaxy.weekly/test.html
        http://scj.galaxy.weekly/test.aspx

    But https://scj.galaxy.weekly/test.html fails. The error message is: "The connection was reset. The connection to the server was reset while the page was loading."

    The certificate was working fine for months. It was created with something similar to this:

        Selfssl /N:CN=*.scjdev.ds /V:3650 /S:1 /P:443

    I have tried several options and none of them are working:

      1. Create a certificate only in "Default Web Site" and link it to SecureBindings from a command prompt:

            cscript adsutil.vbs set /w3svc/1272369728/SecureBindings ":443:galaxy.Weekly.scjdev.ds"

      2. Create a certificate only in the "Galaxy" site and link it to SecureBindings.

      3. Create a certificate in both and link them to SecureBindings.

    Probably I am missing a step or something, but I can't see it. Here is the relevant config of the Galaxy site:

        <IIsWebServer Location="/LM/W3SVC/1272369729"
            AuthFlags="0"
            LogPluginClsid="{FF160663-DE82-11CF-BC0A-00AA006111E0}"
            SSLCertHash="c36a514a0be90fbc121d9c19bb052842289d5aee"
            SSLStoreName="MY"
            SecureBindings=":443:galaxy.Weekly.scjdev.ds"
            ServerAutoStart="TRUE"
            ServerBindings=":80:galaxy.Weekly.scjdev.ds"
            ServerComment="galaxy.Weekly.scjdev.ds">
        </IIsWebServer>

        <IIsWebVirtualDir Location="/LM/W3SVC/1272369729/root"
            AccessFlags="AccessRead | AccessScript"
            AppFriendlyName="Default Application"
            AppIsolated="2"
            AppRoot="/LM/W3SVC/1272369729/Root"
            AuthFlags="AuthAnonymous | AuthNTLM"
            DefaultDoc="Default.aspx"
            DirBrowseFlags="EnableDirBrowsing | DirBrowseShowDate | DirBrowseShowTime | DirBrowseShowSize | DirBrowseShowExtension | DirBrowseShowLongDate"
            Path="D:\Webs\Galaxysite"
            ScriptMaps="some config...">
        </IIsWebVirtualDir>
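
    A small sketch of one way to cross-check what the metabase actually holds for the sites involved, assuming the default AdminScripts location (note the post quotes both ID 1272369728 and 1272369729, so it is worth querying both):

        cd /d %SystemDrive%\Inetpub\AdminScripts
        cscript adsutil.vbs get W3SVC/1272369728/SecureBindings
        cscript adsutil.vbs get W3SVC/1272369729/SecureBindings
        cscript adsutil.vbs get W3SVC/1/SecureBindings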

    Read the article

  • Strange issue in header location redirect

    - by hd01
    I have three websites (example1.com, example2.com, example3.com) hosted on a server. There is a page (test.php) on example1.com with just the code below inside it:

        <?php
        header('Location:http://example2.com/a.php');
        ?>

    When I browse test.php it goes to http://example1.com/a.php. It doesn't treat it as another domain's URL; it tries to find the page on itself. But when I put http://google.com instead of http://example2.com/a.php it works correctly. I'm really confused. What is the problem? Should I set some configuration on the server? (I am the administrator of the hosting server.)

    P.S. The server is behind a Pound server. Here's the Firebug Net output for example1.com/test.php.

    Response headers:

        HTTP/1.1 302 Found
        Date: Tue, 09 Oct 2012 09:03:34 GMT
        Server: Apache/2.2.16 (Debian)
        Location: http://example1.com/a.php
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 21
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=utf-8

    Request headers:

        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Encoding: gzip, deflate
        Accept-Language: en-us,en;q=0.5
        Connection: keep-alive
        Cookie: mycookie
        Host: example1.com
        User-Agent: Mozilla/5.0 (X11; Linux i686; rv:14.0) Gecko/20100101 Firefox/14.0.1

    Read the article

  • Run a shell script using cron

    - by Blanca
    Hi! I have this FeedIndexer.sh:

        #!/bin/sh
        java -jar FeedIndexer.jar

    It just runs FeedIndexer.jar, which is in the same directory as the .sh. I would like to run it using crontab, so I did this in /etc/crontab:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        01 01   * * *   root    run-parts --report /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh
        #

    But I don't know how to run it. Have I made any mistake? Thank you!
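
    A hedged sketch of the kind of entry that is usually suggested instead, assuming the jar really lives in that target directory: run-parts expects a directory of scripts rather than a single file path, and the script's relative reference to FeedIndexer.jar only resolves if cron first changes into that directory. The log path is illustrative.

        01 01 * * * root cd /home/slosada/workspace/FeedIndexer/target && /bin/sh FeedIndexer.sh >> /var/log/feedindexer.log 2>&1

    Making the script executable (chmod +x FeedIndexer.sh) and using an absolute path to the jar inside the script are the fixes commonly paired with this.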

    Read the article

  • Windows 7 permissions and Samba domain

    - by Nimzo
    I'm the main desktop support in an office of about 60 machines, a mix of Mac OS and Windows XP. I'm new to Windows 7. Currently our XP users are admins on their own machines, and my boss wants us to get away from that now that we're going to Windows 7 (64-bit). My boss is largely absent from my day-to-day work, so I'm here looking for help =)

    I have numerous unattended .cmd scripts that we run from a server share for unattended software installs. Some run at login, some have to be run manually. With the NetworkAdmin account logged on to the computer, I am able to run the .cmd files and install stuff just fine. With my test account logged on (Power User), I cannot run the .cmd file - I get an Access Denied. When I change my test account to an Admin on the machine, I still get Access Denied. However, the test account can simply double-click the .exe and install the software just fine, as admin. A Power User can't install anything.

    How do I fix it so that any power user or admin on the machine can run anything as long as it's on our shared software drive?

    Read the article

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks whether a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds).

    Here's what we've already checked – without success:

      - Since we run our webserver behind a load balancer, I've set the Pingdom test on both the load balancer's public DNS and the webserver's public DNS to find out if there's a problem with the AWS load balancer – both tests return the same result.
      - We set up Munin on our webserver. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes).
      - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
      - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
      - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

            touch -t 201209020000 start
            touch -t 201209020015 end
            find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
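
    One assumption-heavy way to gather more data before the next occurrence is a throwaway probe cron job around the failure window, so that even a two-minute outage leaves a per-minute trace that Munin's 5-minute sampling can miss. Everything below (file name, log path, the use of wget against localhost) is illustrative, not a diagnosis:

        # /etc/cron.d/midnight-probe  (remove once the cause is found)
        55-59 23 * * 6  root  (date; uptime; wget -q -O /dev/null -T 5 http://localhost/ && echo apache_ok || echo apache_fail) >> /var/log/midnight-probe.log 2>&1
        0-15  0  * * 0  root  (date; uptime; wget -q -O /dev/null -T 5 http://localhost/ && echo apache_ok || echo apache_fail) >> /var/log/midnight-probe.log 2>&1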

    Read the article

  • Troubleshooting DTCPing Errors

    - by JimmyP
    So I am running DTCPing between two machines on our network and am getting the following error:

        ++++++++++++++++++++++++++++++++++++++++++++++
        DTCping 1.9 Report for WEB2
        ++++++++++++++++++++++++++++++++++++++++++++++
        RPC server is ready
        ++++++++++++Validating Remote Computer Name++++++++++++
        03-03, 13:39:45.099-->Start DTC connection test
        Name Resolution: internal-->10.20.3.236-->internal.something
        03-03, 13:39:45.114-->Start RPC test (WEB2-->internal)
        Problem:fail to invoke remote RPC method
        Error(0x6BA) at dtcping.cpp @303
        -->RPC pinging exception
        -->1722(The RPC server is unavailable.)
        RPC test failed

    I have also run RPC ping, where I get what I believe is the same error:

        C:\Program Files\Windows Resource Kits\Tools>rpcping -s internal
        Exception 1722 (0x000006BA)
        Number of records is: 4
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 1722
        Detection location is 323
        Flags is 0
        NumberOfParameters is 0
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 1237
        Detection location is 313
        Flags is 0
        NumberOfParameters is 0
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 10060
        Detection location is 311
        Flags is 0
        NumberOfParameters is 3
        Long val: 135
        Pointer val: 0
        Pointer val: 0
        ProcessID is 5876
        System Time is: 3/3/2011 2:44:12:822
        Generating component is 8
        Status is 10060
        Detection location is 318
        Flags is 0
        NumberOfParameters is 0

    I'm pretty sure that exception number 1722 is the key, but I can't find any info about it. There may be a firewall with ports that need opening between the machines, which I am checking with our sysadmins now. But I can do a regular ping between the machines. Other than that, I am reading a lot of articles about OS services and components I know nothing about and am having trouble finding any info on. Can anyone shed any light on this? FYI the machine is running Windows Server 2003 R2 SP2.

    Read the article

  • Notepad++ incorrect syntax highlighting?

    - by user360919
    So I want to build an XHTML 1.0 Strict based website, and using Notepad++ for syntax highlighting came to mind. But when I try to add the XML declaration (as stated in the spec, proper XHTML pages should use an XML declaration and be served as application/xhtml+xml), I can't get the entire document highlighted properly. Here is the code I used for a basic page:

        <?xml version="1.0" encoding="UTF-8" standalone="no" ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-us" lang="en-us">
            <head>
                <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" />
                <title>Page</title>
                <script type="application/javascript">
                    alert("A perfectly valid xHTML page...");
                </script>
                <style type="text/css">
                    #test {
                        text-align: center;
                    }
                </style>
            </head>
            <body>
                <h1 id="test">TEST</h1>
            </body>
        </html>

    Paste this into Notepad++ and you'll see that it won't highlight the code between <script type="application/javascript"> and </script> (it renders its background white) if the language is set to XML. If I set the language to HTML, then the script gets correctly highlighted but the XML declaration is not. What to do? How do I make a hybrid language - a combination of XML and HTML?

    Read the article

  • Can I determine a machine's outward facing IP with PHP without relying on external services?

    - by editor
    I'm working with an API that requires the machine's external IP. As far as I know, the PHP environment I'm using can only get our internal IP. The option on the table is using an external service such as whatismyip.com to tell us:

        wget -q -O - http://whatismyip.com/automation/n09230945.asp

    My concern is what happens if that fails. Is there a bulletproof way of determining a machine's IP without relying on external services?
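
    There is no truly bulletproof way to learn the public address from behind NAT without asking something outside the network, so the usual mitigation is redundancy plus timeouts. A hedged PHP sketch of that fallback idea (the second and third service URLs are made-up placeholders, not real endpoints):

        <?php
        // Try several external "what is my IP" endpoints with a short timeout,
        // returning the first plausible answer or null if every service fails.
        function external_ip() {
            $services = array(
                'http://whatismyip.com/automation/n09230945.asp', // from the question
                'http://checkip.example.org/',                     // hypothetical fallback
                'http://ip.example.net/',                          // hypothetical fallback
            );
            $ctx = stream_context_create(array('http' => array('timeout' => 5)));
            foreach ($services as $url) {
                $ip = trim(@file_get_contents($url, false, $ctx));
                if (filter_var($ip, FILTER_VALIDATE_IP)) {
                    return $ip;
                }
            }
            return null; // caller decides how to handle total failure
        }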

    Read the article

  • PHP mail() function stopped working on Windows 2008 R2 IIS 7.5. Why?

    - by Karl
    PHP 5.3.13 and, as noted, IIS 7.5. PHP mail() was working fine until I did three things (at the same time): (a) added memory to the server, taking it from 4 GB to 5 GB; (b) ran Windows Update and applied all available updates; (c) removed the SQL Server installation.

    The Windows 2008 R2 SMTP server still works fine. I know this because I can drop a file in the pickup folder and the mail is delivered. This PHP test script:

        <?php
        $to='my_name@another_domain.com';
        $subject='Test email using PHP';
        $message='This is a test email message'. "\r\n";
        $headers='From:[email protected]' . "\r\n" .
            'Reply-To:[email protected]' . "\r\n" .
            'X-Mailer: PHP/' . phpversion();
        mail($to, $subject, $message, $headers, '[email protected]');
        ?>

    creates this entry in PHP's mail.log:

        mail() on [C:\www\pgs.com\store\admin\test_php_mail.php:1]: To: my_name@another_domain.com -- Headers: From:[email protected] Reply-To:[email protected] X-Mailer: PHP/5.3.13

    When using PHP now, I never see a file dropping into the IIS pickup folder. And one other thing: when using previously working features on the site (such as password recovery), there is no entry made in the mail.log. (The mail log has only just been set up to help solve this problem.)

    How do I fix this? Or at least, how do I diagnose the problem? Thanks.
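
    On Windows, PHP's mail() does not write to the IIS pickup folder itself; it speaks SMTP to whatever host php.ini names, so one thing worth re-confirming after the update work is that these directives still point at the local SMTP service and that the service is listening on that port. A sketch of the relevant php.ini section (values are examples, not the ones from this server):

        [mail function]
        SMTP = localhost
        smtp_port = 25
        sendmail_from = [email protected]

    A quick telnet to localhost on port 25 from the server (if the Telnet client feature is installed) is a simple way to confirm PHP can reach the SMTP service at all.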

    Read the article

  • SSL and regular VHost on the same server [duplicate]

    - by Pascal Boutin
    This question already has an answer here: How to stop HTTPS requests for non-ssl-enabled virtual hosts from going to the first ssl-enabled virtualhost (Apache-SNI)

    I have a server running Apache 2.4 on which several virtual hosts run. The problem I noticed is that if I try to access, let's say, https://example.com, which has no SSL set up, Apache will automatically serve the first VHost that has SSL activated (which is literally not the same site). How can we prevent this strange behaviour, or in other words, how do we tell Apache to ignore SSL for a given site? Here's a sample of what my .conf files look like:

        <VirtualHost foobar.com:80>
            DocumentRoot /somepath/foobar.com
            <Directory /somepath/foobar.com>
                Options -Indexes
                Require all granted
                DirectoryIndex index.php
                AllowOverride All
            </Directory>
            ServerName foobar.com
            ServerAlias www.foobar.com
        </VirtualHost>

        <VirtualHost test.example.com:443>
            DocumentRoot /somepath/
            <Directory /somepath/>
                Options -Indexes
                Require all granted
                AllowOverride All
            </Directory>
            ServerName test.example.com
            SSLEngine on
            SSLCertificateFile [...]
            SSLCertificateKeyFile [...]
            SSLCertificateChainFile [...]
        </VirtualHost>

    With this, if I try to access https://foobar.com, Chrome shows me an SSL error mentioning that the server identified itself as test.example.org. Thanks in advance!
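
    A hedged sketch of the usual workaround: Apache has to answer every HTTPS request with some SSL vhost, so a catch-all default SSL vhost that is loaded before the real ones and simply refuses unknown names keeps foobar.com's HTTPS requests away from test.example.com. The browser will still see a certificate name mismatch, but it will no longer be served the wrong site. Certificate paths and the ServerName are placeholders.

        # Loaded before the real SSL vhosts: catches HTTPS requests for names with no SSL site of their own.
        <VirtualHost _default_:443>
            ServerName default.invalid
            SSLEngine on
            SSLCertificateFile    /path/to/any/cert.pem
            SSLCertificateKeyFile /path/to/any/key.pem
            <Location />
                Require all denied
            </Location>
        </VirtualHost>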

    Read the article

  • schroot build environment setup how to avoid bind-mount home

    - by minghua
    Recent Linux distributions such as Fedora and Ubuntu use a chroot environment to make their builds, because a build often needs special tools installed into the system, and using a chroot avoids making any changes to the host. To set up such a build environment, the first step is to make a chroot. I'm following the setup guide at https://wiki.debian.org/Schroot

        [wheezy-test]
        description=Contains the SPICE program
        aliases=test
        type=directory
        directory=/srv/chroot/test
        users=jsmith
        root-groups=root
        script-config=desktop/config
        personality=linux
        preserve-environment=true

    On my setup the host's /home is on /dev/mapper. When the schroot is entered, the same /home is bind-mounted. Is there a way to avoid this? I prefer to use a different /home inside the chroot.

    When changing the type from directory to plain, the binding is not performed. However, that also loses /proc, /sys, etc., and you'd have to bind-mount them manually. That does not seem to be a good solution.

    If a simple configuration change is unavailable, any idea where the script is for type=directory? Probably I'll manually modify the script. Thanks in advance for any answers or hints!
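
    A sketch of the approach usually taken with type=directory: the bind mounts come from the profile's fstab (script-config=desktop/config points into /etc/schroot/desktop/), so copying that profile under a new name, pointing the chroot at it, and commenting out the /home line keeps /proc and /sys while dropping the bind mount of /home. Paths and the exact line set assume a stock Debian/Ubuntu schroot layout; check your copied fstab rather than trusting this listing.

        # cp -r /etc/schroot/desktop /etc/schroot/buildenv
        # then in the chroot definition: script-config=buildenv/config
        #
        # /etc/schroot/buildenv/fstab -- edited copy:
        /proc           /proc           none    rw,bind         0       0
        /sys            /sys            none    rw,bind         0       0
        /dev            /dev            none    rw,bind         0       0
        /dev/pts        /dev/pts        none    rw,bind         0       0
        #/home          /home           none    rw,bind         0       0   <- commented out
        /tmp            /tmp            none    rw,bind         0       0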

    Read the article

  • What does a Software Developer actually do?

    - by chobo2
    Hi, I am graduating from my Computer Science degree in a few weeks from now!! I have started to look for my first job. For the last couple of years I've gotten really into web programming (ASP.NET). My first choice would be a junior ASP.NET MVC developer position, but I don't think any companies in my area use MVC yet, or if they do they are not hiring. So my second choice would be a junior ASP.NET WebForms developer. My other choices after that would be forms applications and mobile applications using .NET and C#.

    As you can see, I am looking for something with .NET. I spent the last couple of years doing .NET projects for school and in my free time, I love the language, and it would pain me right now to switch to something like PHP. So now I have found a posting in my area for an Entry Software Developer. I like the fact that they are using .NET and that it is an entry-level job (I have never worked in this industry and never had more than something like a tutoring job, so I don't want to go for intermediate jobs).

    The posting:

        Are you looking for an exciting challenge within a dynamic, people-oriented culture where you can launch your technical career? Company Name Inc. is a technology consulting company, located in Canada, that designs, develops, and delivers real-time interactive applications accessed via the Internet as well as back-end tools to support these applications. Company Name provides a combination of out-of-the-box and customized solutions to an expanding list of partners and customers.

        POSITION SUMMARY

        As a member of our team, the successful candidate will be responsible for helping us increase the quality and stability of our software systems by working jointly and directly with both the Software Development teams and the QA Team. The primary mission of this role will be to substantially enhance our test automation suite. The incumbent will design and program automated tests (unit, integration, system, stress and load) in Visual Studio using C# and will develop sound processes that help us identify and resolve defects as early as possible. The successful incumbent will help us improve and enhance system functionality, reliability, performance and scalability.

        This role is specifically designed for an eager, bright, new graduate who is looking for a stepping stone into a software engineering role. We promote from within and invite new graduates to apply for this important position - which may lead to new opportunities. We also offer a generous professional development plan to help you on your way. You will be a key part of a team of experts that is responsible for improving the quality of our software by:

        • Designing, writing, and executing test plans and programmatic tests in Visual Studio using C# and NUnit for functional testing of our code, new features, regression, and performance test procedures.
        • Working with the engineers to design and build the stress and load testing framework which emulates tens and even hundreds of thousands of concurrent users via a distributed network interfacing with our Load Testing Lab.
        • Interfacing with both the Development Team and the QA Team to ensure risks are identified and managed.
        • Mentoring and leading the QA Team in programmatic test automation technologies and tools.

        MUST HAVE SKILLS / QUALIFICATIONS:

        • Diploma or higher Degree in Computer Science, or equivalent formal training.
        • Fundamental C# programming skills.
        • Knowledge of Internet technologies and Microsoft Windows platforms.
        • Knowledge of PC hardware.
        • Excellent communication skills (both oral and written).
        • Self-starter who takes initiative, requires minimal supervision, can handle multiple simultaneous tasks.
        • Detail-oriented, able to concentrate, and work quickly.
        • Proven diagnostic, analytical, and problem solving skills.

        NICE TO HAVE SKILLS:

        • Exposure to Visual Studio Team System or Visual Studio Test Edition.
        • Exposure in C# using NUnit.
        • Exposure to NUnit, HTTPUnit, and other automation tool suites.
        • Exposure to Performance/Stress/Load Testing.
        • Good understanding of relational databases (MS SQL Server).
        • Familiar with video and online multi-player games.

        As part of our team you will have the opportunity to work with a supportive team of experts, drive your own success, and ride the wave as we continually expand our team of experts. If you are interested in this opportunity, please send your resume to [email protected] with "Entry Level Software Developer" in the subject line.

    So that is the posting. To me it sounds like a QA job. I don't have anything against QA jobs, but a lot of them seem to be just clicking buttons and running scripts. Is this what a typical software developer does?

    I am so on the fence about applying for this job. On one side, I am not sure how much programming I would be doing. I want to be programming at least half the time, otherwise my skills will never improve, since I will never be programming in teams and so on. At the same time, I have no experience in the industry, so on the other side I am thinking: just go for it, and then maybe a year later try to get a full programming job (provided that I got this one). Yet if I am not programming in that job, then that experience will not help me for the next job I look for, as I will be back at square one.

    Read the article

  • Why does calling IEnumerable<string>.Count() create an additional assembly dependency?

    - by Gishu
    Assume this chain of DLL references: Tests.dll >> Automation.dll >> White.Core.dll, with the following line of code in Tests.dll, where everything builds:

        result.MissingPaths

    Now when I change this to:

        result.MissingPaths.Count()

    I get the following build error for Tests.dll: "White.UIItem is not defined in an assembly that is not referenced. You must add a reference to White.Core.dll." And I don't want to do that, because it breaks my layering.

    Here is the type definition for result, which is in Automation.dll:

        public class HasResult
        {
            public HasResult(IEnumerable<string> missingPaths)
            {
                MissingPaths = missingPaths;
            }

            public IEnumerable<string> MissingPaths { get; set; }

            public bool AllExist
            {
                get { return !MissingPaths.Any(); }
            }
        }

    Down the call chain, the input param to this ctor is created via (the TreeNode class is in White.Core.dll):

        assetPaths.Where(assetPath => !FindTreeNodeUsingCache(treeHandle, assetPath));

    Why does this dependency leak when calling Count() on IEnumerable? I then suspected that lazy evaluation was causing this (for some reason), so I slotted in a ToArray() in the above line, but that didn't work.

    Update 2011-01-07: Curiouser and curiouser! It won't build until I add a White.Core reference. So I add a reference and build it (in order to find the elusive dependency source). Opening it up in Reflector, the only references listed are Automation, mscorlib, System.Core and NUnit. So the compiler threw away the White reference as it was not needed. ILDASM also confirms that there is no White AssemblyRef entry. Any ideas on how to get to the bottom of this thing (primarily for 'now I wanna know why' reasons)? What are the chances that this is a VS2010/MSBuild bug?

    Update 2011-01-07 #2: As per Shimmy's suggestion, I tried calling the method explicitly as an extension method, Enumerable.Count(result.MissingPaths), and it stops cribbing (not sure why). However I moved some code around after that, and now I'm getting the same issue at a different location using IEnumerable - this time reading and filtering lines out of a file on disk (totally unrelated to White):

        var lines = File.ReadLines(aFilePath).ToArray();

    Once again, if I remove the ToArray() it compiles again - it seems that any method that causes the enumerable to be evaluated (ToArray, Count, ToList, etc.) causes this. Let me try and get a working tiny-app to demo this issue...

    Update 2011-01-07 #3: Phew! More information... It turns out the problem is just in one source file - this file is LINQ-phobic. Any call to an Enumerable extension method has to be explicitly called out. The refactorings that I did caused a new method to be moved into this source file, which had some LINQ :) Still no clue as to why this class dislikes LINQ.

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using G.S.OurAutomation.Constants;
        using G.S.OurAutomation.Framework;
        using NUnit.Framework;

        namespace G.S.AcceptanceTests
        {
            public abstract class ConfigureThingBase : OurTestFixture
            {
                ....

                private static IEnumerable<string> GetExpectedThingsFor(string param)
                {
                    // even this won't compile - although it compiles fine in an adjoining
                    // source file in the same assembly
                    //IEnumerable<string> s = new string[0];
                    //Console.WriteLine(s.Count());

                    // this is the line that is now causing a build failure
                    // var expectedInfo = File.ReadLines(someCsvFilePath)
                    //     .Where(line => !line.StartsWith("REM", StringComparison.InvariantCultureIgnoreCase))
                    //     .Select(line => line.Replace("%PLACEHOLDER%", param))
                    //     .ToArray();

                    // Unrolling the LINQ above removes the build error
                    var expectedInfo = Enumerable.ToArray(
                        Enumerable.Select(
                            Enumerable.Where(
                                File.ReadLines(someCsvFilePath),
                                line => !line.StartsWith("REM", StringComparison.InvariantCultureIgnoreCase)),
                            line => line.Replace("%PLACEHOLDER%", param)));

    Read the article

  • How to Mock HttpWebRequest and HttpWebResponse using the Moq Framework?

    - by Nicholas
    How do you use the Moq framework to mock HttpWebRequest and HttpWebResponse in the following unit test?

        [Test]
        public void Verify_That_SiteMap_Urls_Are_Reachable()
        {
            // Arrange - simplified
            Uri uri = new Uri("http://www.google.com");

            // Act
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
            HttpWebResponse response = (HttpWebResponse)request.GetResponse();

            // Assert
            Assert.AreEqual("OK", response.StatusCode.ToString());
        }
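
    HttpWebRequest and HttpWebResponse expose no interfaces and most of their members are not virtual, so the usual pattern is to hide them behind small wrapper interfaces of your own and mock those. A hedged sketch of that idea (the interface and class names here are made up for illustration, not part of Moq or the question's codebase):

        using System;
        using System.Net;
        using Moq;
        using NUnit.Framework;

        public interface IHttpResponse
        {
            HttpStatusCode StatusCode { get; }
        }

        public interface IHttpRequestFactory
        {
            IHttpResponse Get(Uri uri);
        }

        // Production implementation: a thin wrapper over the real HttpWebRequest/HttpWebResponse.
        public class HttpRequestFactory : IHttpRequestFactory
        {
            public IHttpResponse Get(Uri uri)
            {
                var request = (HttpWebRequest)WebRequest.Create(uri);
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    return new SimpleResponse(response.StatusCode);
                }
            }

            private class SimpleResponse : IHttpResponse
            {
                public SimpleResponse(HttpStatusCode code) { StatusCode = code; }
                public HttpStatusCode StatusCode { get; private set; }
            }
        }

        [TestFixture]
        public class SiteMapTests
        {
            [Test]
            public void Verify_That_SiteMap_Urls_Are_Reachable()
            {
                // Arrange: the code under test depends on IHttpRequestFactory, so Moq can stand in for the network.
                var response = new Mock<IHttpResponse>();
                response.Setup(r => r.StatusCode).Returns(HttpStatusCode.OK);

                var factory = new Mock<IHttpRequestFactory>();
                factory.Setup(f => f.Get(It.IsAny<Uri>())).Returns(response.Object);

                // Act
                var result = factory.Object.Get(new Uri("http://www.google.com"));

                // Assert
                Assert.AreEqual(HttpStatusCode.OK, result.StatusCode);
            }
        }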

    Read the article

  • Plugin execution not covered by lifecycle configuration: org.codehaus.mojo:aspectj-maven-plugin:1.0

    - by alexm
    I switched from q4e Helios to the Indigo m2e plugin and my Maven 2 project no longer works. I had a ROO-generated Spring MVC project. This is what I get:

        Plugin execution not covered by lifecycle configuration:
        org.codehaus.mojo:aspectj-maven-plugin:1.0:test-compile (execution: default, phase: process-test-sources)

        Plugin execution not covered by lifecycle configuration:
        org.codehaus.mojo:aspectj-maven-plugin:1.0:compile (execution: default, phase: process-sources)

    Any insight is greatly appreciated. Thank you.
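
    One commonly used workaround, sketched here without having been verified against this particular POM: tell m2e explicitly what to do with those plugin executions via the org.eclipse.m2e:lifecycle-mapping plugin in pluginManagement. <ignore/> just silences the marker inside Eclipse; <execute/> would instead run the goal on Eclipse builds.

        <pluginManagement>
          <plugins>
            <plugin>
              <groupId>org.eclipse.m2e</groupId>
              <artifactId>lifecycle-mapping</artifactId>
              <version>1.0.0</version>
              <configuration>
                <lifecycleMappingMetadata>
                  <pluginExecutions>
                    <pluginExecution>
                      <pluginExecutionFilter>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>aspectj-maven-plugin</artifactId>
                        <versionRange>[1.0,)</versionRange>
                        <goals>
                          <goal>compile</goal>
                          <goal>test-compile</goal>
                        </goals>
                      </pluginExecutionFilter>
                      <action>
                        <ignore />
                      </action>
                    </pluginExecution>
                  </pluginExecutions>
                </lifecycleMappingMetadata>
              </configuration>
            </plugin>
          </plugins>
        </pluginManagement>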

    Read the article

  • Android: Trusting all Certificates using HttpClient over HTTPS

    - by psuguitarplayer
    Hi all, I recently posted a question regarding HttpClient over HTTPS (found here). I've made some headway, but I've run into new issues. As with my last problem, I can't seem to find an example anywhere that works for me. Basically, I want my client to accept any certificate (because I'm only ever pointing to one server), but I keep getting a javax.net.ssl.SSLException: Not trusted server certificate exception.

    So this is what I have:

        public void connect() throws A_WHOLE_BUNCH_OF_EXCEPTIONS {
            HttpPost post = new HttpPost(new URI(PROD_URL));
            post.setEntity(new StringEntity(BODY));

            KeyStore trusted = KeyStore.getInstance("BKS");
            trusted.load(null, "".toCharArray());
            SSLSocketFactory sslf = new SSLSocketFactory(trusted);
            sslf.setHostnameVerifier(SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);

            SchemeRegistry schemeRegistry = new SchemeRegistry();
            schemeRegistry.register(new Scheme("https", sslf, 443));
            SingleClientConnManager cm = new SingleClientConnManager(post.getParams(), schemeRegistry);
            HttpClient client = new DefaultHttpClient(cm, post.getParams());
            HttpResponse result = client.execute(post);
        }

    And here's the error I'm getting:

        W/System.err( 901): javax.net.ssl.SSLException: Not trusted server certificate
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.startHandshake(OpenSSLSocketImpl.java:360)
        W/System.err( 901):   at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:92)
        W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:321)
        W/System.err( 901):   at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:129)
        W/System.err( 901):   at org.apache.http.impl.conn.AbstractPoolEntry.open(AbstractPoolEntry.java:164)
        W/System.err( 901):   at org.apache.http.impl.conn.AbstractPooledConnAdapter.open(AbstractPooledConnAdapter.java:119)
        W/System.err( 901):   at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:348)
        W/System.err( 901):   at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:555)
        W/System.err( 901):   at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:487)
        W/System.err( 901):   at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:465)
        W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity.connect(MainActivity.java:129)
        W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity.access$0(MainActivity.java:77)
        W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity$2.run(MainActivity.java:49)
        W/System.err( 901): Caused by: java.security.cert.CertificateException: java.security.InvalidAlgorithmParameterException: the trust anchors set is empty
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.TrustManagerImpl.checkServerTrusted(TrustManagerImpl.java:157)
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.OpenSSLSocketImpl.startHandshake(OpenSSLSocketImpl.java:355)
        W/System.err( 901):   ... 12 more
        W/System.err( 901): Caused by: java.security.InvalidAlgorithmParameterException: the trust anchors set is empty
        W/System.err( 901):   at java.security.cert.PKIXParameters.checkTrustAnchors(PKIXParameters.java:645)
        W/System.err( 901):   at java.security.cert.PKIXParameters.<init>(PKIXParameters.java:89)
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.TrustManagerImpl.<init>(TrustManagerImpl.java:89)
        W/System.err( 901):   at org.apache.harmony.xnet.provider.jsse.TrustManagerFactoryImpl.engineGetTrustManagers(TrustManagerFactoryImpl.java:134)
        W/System.err( 901):   at javax.net.ssl.TrustManagerFactory.getTrustManagers(TrustManagerFactory.java:226)
        W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.createTrustManagers(SSLSocketFactory.java:263)
        W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.<init>(SSLSocketFactory.java:190)
        W/System.err( 901):   at org.apache.http.conn.ssl.SSLSocketFactory.<init>(SSLSocketFactory.java:216)
        W/System.err( 901):   at me.harrisonlee.test.ssl.MainActivity.connect(MainActivity.java:107)
        W/System.err( 901):   ... 2 more
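
    For reference, a rough sketch of the trust-everything idea using the plain javax.net.ssl API with HttpsURLConnection instead of the Apache client from the question. It is only defensible because, as stated above, the client only ever talks to one known server, and it should never ship in production as-is.

        // imports assumed: javax.net.ssl.*, java.security.cert.X509Certificate, java.net.URL
        // WARNING: accepts any certificate and any hostname - test/dev use only.
        TrustManager[] trustAll = new TrustManager[] {
            new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) { }
                public void checkServerTrusted(X509Certificate[] chain, String authType) { }
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            }
        };

        SSLContext sslContext = SSLContext.getInstance("TLS");
        sslContext.init(null, trustAll, new java.security.SecureRandom());

        HttpsURLConnection.setDefaultSSLSocketFactory(sslContext.getSocketFactory());
        HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
            public boolean verify(String hostname, SSLSession session) { return true; }
        });

        HttpsURLConnection conn = (HttpsURLConnection) new URL(PROD_URL).openConnection();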

    Read the article

  • TFS Manual Mstest Publish Results?

    - by ScSub
    Following an MSDN web page, I am trying to manually run MSTest within my TFSBuild.proj and feed the results into the pass/fail logic so the build will fail if this particular test fails. It's kind of like running FxCop or something else from CMD, capturing a "0" or "1", and force-failing the build.

        MSTest /testcontainer:test.dll /publish:http://ourtfsmachine:8080 /teamproject:ProjectName /publishbuild:BuildNumber01 /platform:AnyCpu /flavor:Release

    I could understand running this inside an Exec task, but I don't know what the BuildNumber is, for example. Help?
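
    A hedged sketch of how that Exec might look inside TFSBuild.proj, assuming a TFS 2008-style Team Build where $(BuildNumber) and $(TeamProject) are already defined by the time the target runs and where VS 2008's environment variable locates MSTest.exe; Exec fails the build automatically on a non-zero exit code unless ContinueOnError is set.

        <Target Name="AfterCompile">
          <Exec Command="&quot;$(VS90COMNTOOLS)..\IDE\MSTest.exe&quot; /testcontainer:&quot;$(OutDir)test.dll&quot; /publish:http://ourtfsmachine:8080 /teamproject:$(TeamProject) /publishbuild:$(BuildNumber) /platform:AnyCpu /flavor:Release"
                ContinueOnError="false" />
        </Target>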

    Read the article

  • IntegrationTests - A potentially dangerous Request.Path value was detected from the client

    - by stacker
    I get this error when requesting the URI http://www.site.com/%3f:

        A potentially dangerous Request.Path value was detected from the client (?).

    How can I write an integration test for this type of error? I want to test against all of these errors:

        A potentially dangerous Request.Path value was detected from the client
        A potentially dangerous Request.Cookies value was detected from the client
        A potentially dangerous Request.Form value was detected from the client
        A potentially dangerous Request.QueryString value was detected from the client
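
    A hedged sketch of one way such a test can look with HttpWebRequest and NUnit 2.5+ (the URLs and the second test case are assumptions). Which exact status ASP.NET returns for each check can differ, so the sketch only asserts that the request does not succeed; also, depending on the .NET version, the Uri class may normalize some characters, so it is worth confirming with a network trace that the raw request really goes out as intended.

        [TestCase("http://www.site.com/%3f")]
        [TestCase("http://www.site.com/?q=%3Cscript%3E")]
        public void Dangerous_requests_do_not_succeed(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Assert.Fail("Expected an error response, got " + response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                var response = (HttpWebResponse)ex.Response;
                Assert.IsNotNull(response, "No HTTP response at all: " + ex.Status);
                Assert.GreaterOrEqual((int)response.StatusCode, 400);
            }
        }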

    Read the article
