Search Results

Search found 20099 results on 804 pages for 'virtual host'.


  • Returning a DOM element with Webdriver in Javascript

    - by ehmicky
    How do I return a DOM element with Webdriver in JavaScript? I am using the wd.js JavaScript bindings:

      require("wd")
        .remote("promiseChain")
        .init()
        .get("http://www.google.com")
        .elementById("mngb")
        .then(function(element) { console.log(element); });

    Instead of a standard DOM element, I am getting this weird object (for example, I cannot get the HTML code out of it):

      { value: '0',
        browser:
         { domain: null, _events: {}, _maxListeners: 10,
           configUrl: { protocol: 'http:', slashes: true, auth: null, host: '127.0.0.1:4444', port: '4444', hostname: '127.0.0.1', hash: null, search: '', query: {}, pathname: '/wd/hub', path: '/wd/hub', href: 'http://127.0.0.1:4444/wd/hub' },
           sauceRestRoot: 'https://saucelabs.com/rest/v1',
           noAuthConfigUrl: { protocol: 'http:', slashes: true, host: '127.0.0.1:4444', port: '4444', hostname: '127.0.0.1', hash: null, search: null, query: null, pathname: '/wd/hub', path: '/wd/hub', href: 'http://127.0.0.1:4444/wd/hub' },
           defaultCapabilities: { browserName: 'firefox', version: '', javascriptEnabled: true, platform: 'ANY' },
           _httpConfig: { timeout: undefined, retries: 3, retryDelay: 15, baseUrl: undefined },
           sampleElement: { value: 1, browser: [Circular] },
           sessionID: '238c9837-3d82-4d90-9594-cefb4ba8e6b9' } }
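    A possible direction (a minimal sketch, not a confirmed fix): WebDriver element objects are remote references in the JSON wire protocol, not DOM nodes, so the HTML has to be fetched from the browser. Assuming wd's promise-chain getAttribute is available, something along these lines should return the markup:

      require("wd")
        .remote("promiseChain")
        .init()
        .get("http://www.google.com")
        .elementById("mngb")
        .getAttribute("outerHTML")   // ask the remote browser for the element's HTML
        .then(function(html) { console.log(html); });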

    Read the article

  • Deploying a Rails App to Multiple Servers using Capistrano - Best Practices

    - by Louise
    I have a Rails application that I need to deploy to 3 servers - machine1.com, machine2.com and machine3.com. I want to be able to deploy it to all machines at once and to each machine individually. Can someone help me out with a skeleton Capistrano config file / recipe? Should it all be in deploy.rb or should I break it out into machine1.rb, etc.? I thought I was on the right track getting Capistrano to take in command line arguments, but it choked when I tried to set the roles within the namespaces. I'd pass in 'hosts=1,2,3' as an argument and set the role :app/:web/:db to "machine#{host}.com" after splitting on the comma and looping in an each do |host| block... Anyway, other than creating 4 different deploy.rb files and renaming one before running cap deploy each time, I'm stumped. I'd like to be able to do the following:

      cap deploy:machine1:latest_version_from_svn
      cap deploy:all_machines:latest_version_from_svn

    Just don't know if it should all be in deploy.rb, split up with namespaces, or if it should be broken into multiple deploy*.rb files.
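    One possible shape for this (a hedged sketch against Capistrano 2 conventions; the HOSTS variable and host names are illustrative): keep a single config/deploy.rb and pick targets from an environment variable:

      # config/deploy.rb
      hosts = (ENV['HOSTS'] || '1,2,3').split(',').map { |n| "machine#{n}.com" }
      role :app, *hosts
      role :web, *hosts
      role :db,  hosts.first, :primary => true

    Then `cap deploy HOSTS=1` targets machine1.com alone and plain `cap deploy` hits all three, without duplicating deploy files.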

    Read the article

  • compiler warning at C++ template base class

    - by eike
    I get a compiler warning that I don't understand in this context when I compile "Child.cpp" from the following code. (Don't wonder: I stripped my class declarations down to the bare minimum, so the content will not make much sense, but you will see the problem quicker.) I get the warning with VS2003 and VS2008 on the highest warning level.

    The code

    AbstractClass.h:

      #include <iostream>

      template<typename T>
      class AbstractClass
      {
      public:
          virtual void Cancel(); // { std::cout << "Abstract Cancel" << std::endl; }
          virtual void Process() = 0;
      };

      // Out-of-class definition. If I comment this out and use the inline
      // definition above (currently commented out), I don't get a compiler warning.
      template<typename T>
      void AbstractClass<T>::Cancel()
      {
          std::cout << "Abstract Cancel" << std::endl;
      }

    Child.h:

      #include "AbstractClass.h"

      class Child : public AbstractClass<int>
      {
      public:
          virtual void Process();
      };

    Child.cpp:

      #include "Child.h"
      #include <iostream>

      void Child::Process()
      {
          std::cout << "Process" << std::endl;
      }

    The warning

    The class "Child" is derived from "AbstractClass". In "AbstractClass" there is the public method "AbstractClass::Cancel()". If I define the method outside of the class body (as in the code you see), I get the compiler warning...

      AbstractClass.h(7) : warning C4505: 'AbstractClass::Cancel' : unreferenced local function has been removed with [T=int]

    ...when I compile "Child.cpp". I do not understand this, because this is a public function and the compiler can't know whether I will reference this method later or not. And, in the end, I do reference this method, because I call it in main.cpp, and despite the compiler warning the method works if I compile and link all files and execute the program:

      // main.cpp
      #include <iostream>
      #include "Child.h"

      int main()
      {
          Child child;
          child.Cancel(); // works, despite the warning
      }

    If I define the Cancel() function inline (you see it as commented-out code in AbstractClass.h), I don't get the compiler warning. Of course my program works, but I want to understand this warning - or is this just a compiler mistake? Furthermore, if I do not implement AbstractClass as a template class (just for test purposes in this case), I also don't get the compiler warning...?
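    A hedged note on what is likely happening (not from the original post): instantiating AbstractClass<int> in Child.cpp instantiates the out-of-line Cancel() definition in that translation unit, and since nothing in that TU calls it, MSVC discards it and reports C4505 - a well-known quirk with template members, not an error. Since C4505 is emitted at the end of the translation unit, the usual way to silence it is file-wide:

      // at the top of Child.cpp (or pass /wd4505 on the command line);
      // this warning is issued at end-of-TU, so it must be disabled for the whole file:
      #pragma warning(disable : 4505)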

    Read the article

  • How do you diagnose a 500 error on Heroku when there is no error message in the logs?

    - by lala
    I have a Rails app on Heroku that is serving 500 errors at random intervals. Web pages will display "Internal server error" in plain text, instead of the usual "We're sorry. Something went wrong." page. When I refresh the page, it works fine. The logs don't show me an error message, just:

      2013-10-11 12:20:33.763690+00:00 heroku router - - at=info method=HEAD path=/ host=www.mydomain.com fwd="184.73.237.85/ec2-184-73-237-85.compute-1.amazonaws.com" dyno=web.1 connect=1ms service=63ms status=200 bytes=0
      2013-10-11 12:21:03.561867+00:00 heroku router - - at=info method=GET path=/ host=www.mydomain.com fwd="50.112.95.211/ec2-50-112-95-211.us-west-2.compute.amazonaws.com" dyno=web.1 connect=0ms service=1ms status=500 bytes=21

    Support has told me to look at request queueing in New Relic, but New Relic only shows a big red mark saying the server is down (even though the site works fine when refreshed). With no error messages, I'm at a loss for how to diagnose this issue.
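    A couple of hedged starting points (not from the original post): bytes=21 is exactly the length of "Internal server error" and service=1ms means the dyno answered almost instantly, which suggests the error is raised before Rails renders its error page. Raising the app's log verbosity and tailing the router and app logs together can surface the real exception:

      # config/environments/production.rb (Rails 3.x-style; illustrative)
      config.log_level = :debug

      # then watch both log sources live:
      #   heroku logs --tail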

    Read the article

  • Optimize php-fpm and varnish for a powerful server

    - by Jim
    My setup is: Intel® Core™ i7-2600 and 16 GB DDR3 RAM, running varnish + nginx + php-fpm + APC for a not very heavy WordPress blog with W3 Total Cache and a CDN. My problem is that after 55 hits per second (according to blitz.io) varnish starts giving out timeouts. CPU usage at this point is hardly 1%, and free memory remains above 10 GB at all times. I tried benchmarking php-fpm directly, with a result of 150 hits/s without any timeouts - but after that the CPU usage goes to 100% and it stops responding. Can you help me optimize it to handle more? As I understand it, nginx has nothing to do over here, so I am not including its config.

    php-fpm config:

      listen = /tmp/php5-fpm.sock
      listen.allowed_clients = 127.0.0.1
      user = nginx
      group = nginx
      pm = dynamic
      pm.max_children = 150
      pm.start_servers = 7
      pm.min_spare_servers = 2
      pm.max_spare_servers = 15
      pm.max_requests = 500
      slowlog = /var/log/php-fpm/www-slow.log
      php_admin_value[error_log] = /var/log/php-fpm/www-error.log
      php_admin_flag[log_errors] = on

    APC config:

      extension = apc.so
      apc.enabled=1
      apc.shm_size=512MB
      apc.num_files_hint=0
      apc.user_entries_hint=0
      apc.ttl=7200
      apc.use_request_time=1
      apc.user_ttl=7200
      apc.gc_ttl=3600
      apc.cache_by_default=1
      apc.filters
      apc.mmap_file_mask=/tmp/apc.XXXXXX
      apc.file_update_protection=2
      apc.enable_cli=0
      apc.max_file_size=1M
      apc.stat=1
      apc.stat_ctime=0
      apc.canonicalize=0
      apc.write_lock=1
      apc.report_autofilter=0
      apc.rfc1867=0
      apc.rfc1867_prefix=upload_
      apc.rfc1867_name=APC_UPLOAD_PROGRESS
      apc.rfc1867_freq=0
      apc.rfc1867_ttl=3600
      apc.include_once_override=0
      apc.lazy_classes=0
      apc.lazy_functions=0
      apc.coredump_unmap=0
      apc.file_md5=0
      apc.preload_path

    Varnish VCL:

      backend default {
          .host = "127.0.0.1";
          .port = "8080";
          .connect_timeout = 6s;
          .first_byte_timeout = 6s;
          .between_bytes_timeout = 60s;
      }

      acl purgehosts {
          "localhost";
          "127.0.0.1";
      }

      # Called after a document has been successfully retrieved from the backend.
      sub vcl_fetch {
          # Uncomment to make the default cache "time to live" 5 minutes, handy
          # but it may cache stale pages unless purged. (TODO)
          # By default Varnish will use the headers sent to it by Apache (the backend server)
          # to figure out the correct TTL.
          # WP Super Cache sends a TTL of 3 seconds, set in wp-content/cache/.htaccess
          set beresp.ttl = 24h;

          # Strip cookies for static files and set a long cache expiry time.
          if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
              unset beresp.http.set-cookie;
              set beresp.ttl = 24h;
          }

          # If WordPress cookies are found then the page is not cacheable
          if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
              # set beresp.cacheable = false; # versions less than 3
              # beresp.ttl > 0 is cacheable, so 0 will not be cached
              set beresp.ttl = 0s;
          } else {
              # set beresp.cacheable = true;
              set beresp.ttl = 24h; # cache for 24hrs
          }

          # Varnish determined the object was not cacheable
          # if ttl is not > 0 seconds then it is not cacheable
          if (!beresp.ttl > 0s) {
              # set beresp.http.X-Cacheable = "NO:Not Cacheable";
          } else if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
              # You don't wish to cache content for logged in users
              set beresp.http.X-Cacheable = "NO:Got Session";
              return(hit_for_pass); # previously just pass, but changed in v3+
          } else if (beresp.http.Cache-Control ~ "private") {
              # You are respecting the Cache-Control=private header from the backend
              set beresp.http.X-Cacheable = "NO:Cache-Control=private";
              return(hit_for_pass);
          } else if (beresp.ttl < 1s) {
              # You are extending the lifetime of the object artificially
              set beresp.ttl = 300s;
              set beresp.grace = 300s;
              set beresp.http.X-Cacheable = "YES:Forced";
          } else {
              # Varnish determined the object was cacheable
              set beresp.http.X-Cacheable = "YES";
              if (beresp.status == 404 || beresp.status >= 500) {
                  set beresp.ttl = 0s;
              }
              # Deliver the content
              return(deliver);
          }
      }

      sub vcl_hash {
          # Each cached page has to be identified by a key that unlocks it.
          # Add the browser cookie only if a WordPress cookie is found.
          if (req.http.Cookie ~ "(wp-postpass|wordpress_logged_in|comment_author_)") {
              # set req.hash += req.http.Cookie;
              hash_data(req.http.Cookie);
          }
      }

      # vcl_recv is called whenever a request is received
      sub vcl_recv {
          # Remove ?ver=xxxxx strings from urls so css and js files are cached.
          # Watch out when upgrading WordPress, need to restart Varnish or flush cache.
          set req.url = regsub(req.url, "\?ver=.*$", "");

          # Remove "replytocom" from requests to make caching better.
          set req.url = regsub(req.url, "\?replytocom=.*$", "");

          remove req.http.X-Forwarded-For;
          set req.http.X-Forwarded-For = client.ip;

          # Exclude this site because it breaks if cached
          if (req.http.host == "sr.ituts.gr") {
              return(pass);
          }

          # Serve objects up to 2 minutes past their expiry if the backend is slow to respond.
          set req.grace = 120s;

          # Strip cookies for static files:
          if (req.url ~ "\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|pdf|txt|tar|wav|bmp|rtf|js|flv|swf|html|htm)$") {
              unset req.http.Cookie;
              return(lookup);
          }

          # Remove has_js and Google Analytics __* cookies.
          set req.http.Cookie = regsuball(req.http.Cookie, "(^|;\s*)(__[a-z]+|has_js)=[^;]*", "");
          # Remove a ";" prefix, if present.
          set req.http.Cookie = regsub(req.http.Cookie, "^;\s*", "");
          # Remove empty cookies.
          if (req.http.Cookie ~ "^\s*$") {
              unset req.http.Cookie;
          }

          if (req.request == "PURGE") {
              if (!client.ip ~ purgehosts) {
                  error 405 "Not allowed.";
              }
              # in previous versions ban() was purge()
              ban("req.url ~ " + req.url + " && req.http.host == " + req.http.host);
              error 200 "Purged.";
          }

          # Pass anything other than GET and HEAD directly.
          if (req.request != "GET" && req.request != "HEAD") {
              return(pass);
          }
          /* We only deal with GET and HEAD by default */

          # Remove the cookie for comments to make caching better.
          set req.http.cookie = regsub(req.http.cookie, "1231111111111111122222222333333=[^;]+(; )?", "");

          # Never cache the admin pages, or the server-status page, or your feed? you may want to.. i don't
          if (req.request == "GET" && (req.url ~ "(wp-admin|bb-admin|server-status|feed)")) {
              return(pipe);
          }

          # Don't cache authenticated sessions
          if (req.http.Cookie && req.http.Cookie ~ "(wordpress_|PHPSESSID)") {
              return(lookup);
          }

          # Don't cache ajax requests
          if (req.http.X-Requested-With == "XMLHttpRequest" || req.url ~ "nocache" || req.url ~ "(control.php|wp-comments-post.php|wp-login.php|bb-login.php|bb-reset-password.php|register.php)") {
              return(pass);
          }

          return(lookup);
      }

    Varnish daemon options:

      DAEMON_OPTS="-a :80 \
                   -T 127.0.0.1:6082 \
                   -f /etc/varnish/ituts.vcl \
                   -u varnish -g varnish \
                   -S /etc/varnish/secret \
                   -p thread_pool_add_delay=2 \
                   -p thread_pools=8 \
                   -p thread_pool_min=100 \
                   -p thread_pool_max=1000 \
                   -p session_linger=50 \
                   -p session_max=150000 \
                   -p sess_workspace=262144 \
                   -s malloc,5G"

    I'm not sure where to start: should I first optimize php-fpm and then move on to varnish, or is php-fpm already at its limit, so I should start looking for the problem in varnish?
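    A hedged first step (illustrative commands; not from the original post): since php-fpm alone sustains ~150 hits/s, a 55 hits/s ceiling in front of varnish often points at connection handling rather than CPU, so checking varnish's drop counters and the kernel's accept backlog is a cheap way to decide where to dig:

      varnishstat -1 | grep -Ei 'drop|overflow'   # dropped sessions / queue overflows under load
      cat /proc/sys/net/core/somaxconn            # listen backlog cap; often only 128 by default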

    Read the article

  • FluentNhibernate many-to-many mapping - resolving record is not inserted

    - by Dmitriy Nagirnyak
    Hi, I have a many-to-many mapping defined (only relevant fields included):

      // MODEL:
      public class User : IPersistentObject
      {
          public User()
          {
              Permissions = new HashedSet<Permission>();
          }
          public virtual int Id { get; protected set; }
          public virtual ISet<Permission> Permissions { get; set; }
      }

      public class Permission : IPersistentObject
      {
          public Permission() { }
          public virtual int Id { get; set; }
      }

      // MAPPING:
      public class UserMap : ClassMap<User>
      {
          public UserMap()
          {
              Id(x => x.Id);
              HasManyToMany(x => x.Permissions).Cascade.All().AsSet();
          }
      }

      public class PermissionMap : ClassMap<Permission>
      {
          public PermissionMap()
          {
              Id(x => x.Id).GeneratedBy.Assigned();
              Map(x => x.Description);
          }
      }

    The following test fails, as no record is inserted into the User_Permissions table:

      [Test]
      public void AddingANewUserPrivilegeShouldSaveIt()
      {
          var p1 = new Permission { Id = 123, Description = "p1" };
          Session.Save(p1);

          var u = new User { Email = "[email protected]" };
          u.Permissions.Add(p1);
          Session.Save(u);

          var userId = u.Id;
          Session.Evict(u);

          Session.Get<User>(userId).Permissions.Should().Not.Be.Empty();
      }

    The SQL executed is (SQLite):

      INSERT INTO "Permission" (Description, Id) VALUES (@p0, @p1);@p0 = 'p1', @p1 = 1
      INSERT INTO "User" (Email) VALUES (@p0); select last_insert_rowid();@p0 = '[email protected]'
      SELECT user0_.Id as Id2_0_, user0_.Email as Email2_0_ FROM "User" user0_ WHERE user0_.Id=@p0;@p0 = 1
      SELECT permission0_.UserId as UserId1_, permission0_.PermissionId as Permissi2_1_, permission1_.Id as Id4_0_, permission1_.Description as Descript2_4_0_ FROM User_Permissions permission0_ left outer join "Permission" permission1_ on permission0_.PermissionId=permission1_.Id WHERE permission0_.UserId=@p0;@p0 = 1

    We can clearly see that no record is inserted into the User_Permissions table where it should be. I am not sure what I am doing wrong and need advice, so can you please help me to pass this test. Thanks, Dmitriy.
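    A hedged observation (a sketch, not a confirmed diagnosis): NHibernate writes collection link rows only when the session flushes, and Session.Save by itself does not guarantee a flush before the later Get. Wrapping the arrange step in a transaction (or calling Session.Flush() before Evict) forces the User_Permissions INSERT out:

      // illustrative re-arrangement of the test's arrange step:
      using (var tx = Session.BeginTransaction())
      {
          Session.Save(p1);
          var u = new User { Email = "user@example.com" }; // address is illustrative
          u.Permissions.Add(p1);
          Session.Save(u);
          tx.Commit(); // flushes: the many-to-many link row is inserted here
      }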

    Read the article

  • Help needed to resolve RMI RemoteException

    - by Gabriel Parenza
    Hello friends, any idea why I get a RemoteException while trying to invoke methods on a Unix machine from Windows? I am inside the network and don't think this is a firewall problem, as I can telnet from Windows to the Unix box after starting the RMI server on the Unix box. I also could not understand why it is going to the local loopback IP.

    Stack trace:

      RemoteException occurred, details:
      java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
          java.net.ConnectException: Connection refused: connect
      java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is:
          java.net.ConnectException: Connection refused: connect

    Many thanks in advance.
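    A hedged pointer (not from the original post): this pattern usually means the Unix host resolved its own hostname to 127.0.0.1, so the stub it hands back to clients advertises the loopback address. Pinning the advertised hostname when the server starts is the standard knob; the IP below is illustrative:

      // set before exporting/binding the remote object on the Unix box:
      System.setProperty("java.rmi.server.hostname", "192.168.1.50");

    Equivalently, pass -Djava.rmi.server.hostname=... on the server's java command line, or fix the /etc/hosts entry so the hostname resolves to the real interface.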

    Read the article

  • C++ type-checking at compile-time

    - by Masterofpsi
    Hi, all. I'm pretty new to C++, and I'm writing a small library (mostly for my own projects) in C++. In the process of designing a type hierarchy, I've run into the problem of defining the assignment operator. I've taken the basic approach that was eventually reached in this article, which is that for every class MyClass in a hierarchy derived from a class Base you define two assignment operators like so:

      class MyClass: public Base
      {
      public:
          MyClass& operator =(MyClass const& rhs);
          virtual MyClass& operator =(Base const& rhs);
      };

      // automatically gets defined, so we make it call the virtual function below
      MyClass& MyClass::operator =(MyClass const& rhs)
      {
          return (*this = static_cast<Base const&>(rhs));
      }

      MyClass& MyClass::operator =(Base const& rhs)
      {
          assert(typeid(rhs) == typeid(*this)); // assigning between different types is a logical error
          MyClass const& casted_rhs = dynamic_cast<MyClass const&>(rhs);
          try {
              // allocate new variables
              Base::operator =(rhs);
          } catch(...) {
              // delete the allocated variables
              throw;
          }
          // assign to member variables
      }

    The part I'm concerned with is the assertion for type equality. Since I'm writing a library, where assertions will presumably be compiled out of the final result, this has led me to go with a scheme that looks more like this:

      class MyClass: public Base
      {
      public:
          MyClass& operator =(MyClass const& rhs);
          // etc

          virtual inline MyClass& operator =(Base const& rhs)
          {
              assert(typeid(rhs) == typeid(*this));
              return this->set(static_cast<Base const&>(rhs));
          }

      private:
          MyClass& set(Base const& rhs); // same basic thing
      };

    But I've been wondering if I could check the types at compile time. I looked into Boost.TypeTraits, and I came close by doing BOOST_MPL_ASSERT((boost::is_same<BOOST_TYPEOF(*this), BOOST_TYPEOF(rhs)>));, but since rhs is declared as a reference to the parent class and not the derived class, it choked. Now that I think about it, my reasoning seems silly - I was hoping that since the function was inline, it would be able to check the actual parameters themselves, but of course the preprocessor always gets run before the compiler. But I was wondering if anyone knew of any other way I could enforce this kind of check at compile time.
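    One hedged idea (a sketch, not a full answer): the dynamic type of rhs only exists at run time, so no compile-time check can see it through a Base&. What can be checked statically is the CRTP variant, where the base is parameterized on the derived type and assignment is only ever declared between identical types:

      // illustrative CRTP skeleton:
      template <typename Derived>
      class Base
      {
      protected:
          // shared implementation that derived assignment operators call into
          void assign_base(Base const& rhs) { /* copy Base's members */ }
      };

      class MyClass : public Base<MyClass>
      {
      public:
          MyClass& operator =(MyClass const& rhs) // only MyClass = MyClass compiles
          {
              assign_base(rhs);
              // assign MyClass's own members...
              return *this;
          }
      };

      // MyClass m; OtherClass o; m = o;   // would now be a compile error

    The trade-off is that Base<X> and Base<Y> are unrelated types, so assignment through a common base reference (the run-time-polymorphic case) is lost - which is exactly the case the typeid assert was guarding in the original design.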

    Read the article

  • Maven copy project output into other project resources

    - by Thomas
    There are two projects: 1) an applet project that outputs a jar file, and 2) a web app project that should host the jar file. After (1) finishes building, the applet jar file should be copied into the webapp folder of (2). The purpose is that (2) will host the applet (1) on the Internet. A lot of examples explain how to use another project as a library dependency. Other examples show how to use the ant plugin to copy files. I am unsure how to properly set this up so that 'mvn install' on the parent project will do the copying at the right time.
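    A hedged sketch of one common arrangement (the artifact coordinates are illustrative): in the webapp's pom.xml, let the maven-dependency-plugin copy the applet jar into the exploded war during packaging, so the reactor build order handles "the right time":

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
          <execution>
            <phase>prepare-package</phase>
            <goals><goal>copy</goal></goals>
            <configuration>
              <artifactItems>
                <artifactItem>
                  <groupId>com.example</groupId>
                  <artifactId>my-applet</artifactId>
                  <version>1.0</version>
                  <outputDirectory>${project.build.directory}/${project.build.finalName}/applet</outputDirectory>
                </artifactItem>
              </artifactItems>
            </configuration>
          </execution>
        </executions>
      </plugin>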

    Read the article

  • Looking for some thoughts on an image printing app

    - by Alex
    Hey All, I'm looking for thoughts/advice. I have an upcoming project (all .NET) that will require the following:

    - it pulls data once a day from an online service provider, based on certain criteria
    - it saves the data locally for reference and reporting
    - the data that's pulled will be used to create gift cards

    So after the data is loaded, a process will run to generate "virtual cards" and send them to a network printer. Once printed, the system will update the local data, recording a successful or failed print. My initial thought was to create a Windows service to pull the data... but then I couldn't decide how I was going to put a "virtual card" together and get it to print. Then I considered doing it as a WPF app; I figure that will give me access to the graphics and printing ability. Maybe neither of these is the right direction... Any ideas or thoughts would be greatly appreciated. Alex
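    For the printing piece, a hedged sketch of what WPF's printing API allows (the layout, sizes, and printer queue name are illustrative, and printing from a non-interactive service has its own caveats worth researching):

      // compose the "virtual card" as a WPF visual and print it to a named queue
      var card = new System.Windows.Controls.Border { Width = 340, Height = 210 };
      card.Child = new System.Windows.Controls.TextBlock { Text = "Gift card #12345" };
      card.Measure(new System.Windows.Size(340, 210));               // lay the visual out
      card.Arrange(new System.Windows.Rect(0, 0, 340, 210));         // before printing it

      var dialog = new System.Windows.Controls.PrintDialog();
      dialog.PrintQueue = new System.Printing.PrintQueue(
          new System.Printing.PrintServer(), "NetworkCardPrinter");  // illustrative name
      dialog.PrintVisual(card, "Gift card");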

    Read the article

  • Apache: How can i see my localhost on 192.168.1.101 from 192.168.1.102?

    - by takpar
    Hi, I'm running Apache on Ubuntu. My IP address is 192.168.1.101. While http://localhost and http://192.168.1.101 work fine on my PC, I cannot access it from my laptop (192.168.1.102). It's strange: I can ping 192.168.1.101, but I get "The connection has timed out." in the browser. I'm using the default Apache config, so this is what my sites-available/default looks like:

      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerAdmin webmaster@localhost
          DocumentRoot /home/www/public_html
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /home/www/public_html>
              Options Indexes FollowSymLinks MultiViews
              #AllowOverride None
              AllowOverride all
              Order allow,deny
              allow from all
          </Directory>
      </VirtualHost>

    /etc/apache2/ports.conf:

      NameVirtualHost *:80
      Listen 80
      <IfModule mod_ssl.c>
          # If you add NameVirtualHost *:443 here, you will also have to change
          # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
          # to <VirtualHost *:443>
          # Server Name Indication for SSL named virtual hosts is currently not
          # supported by MSIE on Windows XP.
          Listen 443
      </IfModule>
      <IfModule mod_gnutls.c>
          Listen 443
      </IfModule>

    My laptop runs Ubuntu as well, so I don't think this is a firewall issue.

    Commands executed on the laptop (192.168.1.102):

      adp@adp-laptop:~$ ping 192.168.1.101
      PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data.
      64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=32.1 ms
      64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=54.8 ms
      64 bytes from 192.168.1.101: icmp_seq=3 ttl=64 time=77.0 ms
      64 bytes from 192.168.1.101: icmp_seq=4 ttl=64 time=100 ms
      ^C
      --- 192.168.1.101 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3003ms
      rtt min/avg/max/mdev = 32.193/66.193/100.717/25.463 ms

      adp@adp-laptop:~$ telnet 192.168.1.101 80
      Trying 192.168.1.101...
      telnet: Unable to connect to remote host: Connection timed out

    Commands executed on the PC (192.168.1.101):

      adp@adp-desktop:~$ ps afx | grep http
      12672 pts/4 S+ 0:00 | \_ grep --color=auto http

      adp@adp-desktop:~$ ping 192.168.1.102
      PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data.
      64 bytes from 192.168.1.102: icmp_seq=1 ttl=64 time=32.1 ms
      64 bytes from 192.168.1.102: icmp_seq=2 ttl=64 time=54.8 ms
      64 bytes from 192.168.1.102: icmp_seq=3 ttl=64 time=77.0 ms
      64 bytes from 192.168.1.102: icmp_seq=4 ttl=64 time=100 ms
      ^C
      --- 192.168.1.102 ping statistics ---
      4 packets transmitted, 4 received, 0% packet loss, time 3003ms
      rtt min/avg/max/mdev = 32.193/66.193/100.717/25.463 ms

      adp@adp-desktop:~$ telnet 192.168.1.102 80
      Trying 192.168.1.102...
      telnet: Unable to connect to remote host: Connection refused

      adp@adp-desktop:~$ telnet 192.168.1.102
      Trying 192.168.1.102...
      telnet: Unable to connect to remote host: Connection refused

    What should I do?
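    A hedged reading (not from the original post): "connection timed out" on port 80 despite working pings usually means a packet filter, but note that `ps afx | grep http` would not match Apache on Ubuntu anyway, since the process is named apache2. Two illustrative checks on the PC:

      sudo netstat -tlnp | grep ':80 '   # is anything actually listening on 0.0.0.0:80?
      sudo iptables -L -n -v             # any DROP/REJECT rule matching dpt:80?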

    Read the article

  • Can't make my WCF extension work

    - by Sergio Romero
    I have a WCF solution that consists of the following class libraries:

    1. Exercise.Services: contains the implementation classes for the services.
    2. Exercise.ServiceProxy: contains the classes that are instantiated in the client.
    3. Exercise.HttpHost: contains the services (*.svc files).

    I'm calling the service from a console application, and the "first version" works really well, so I took the next step, which is to create a custom ServiceHostFactory, ServiceHost, and InstanceProvider so I can use constructor injection in my services, as explained in this article. These classes are implemented in yet another class library:

    4. Exercise.StructureMapWcfExtension

    Now even though I've modified my service like this:

      <%@ ServiceHost Language="C#" Debug="true"
          Factory="Exercise.StructureMapWcfExtension.StructureMapServiceHostFactory"
          Service="Exercise.Services.PurchaseOrderService" %>

    I always get the following exception:

      System.ServiceModel.CommunicationException: Security negotiation failed because the remote party did not send back a reply in a timely manner. This may be because the underlying transport connection was aborted.

    It fails on this line of code:

      public class PurchaseOrderProxy : ClientBase<IPurchaseOrderService>, IPurchaseOrderService
      {
          public PurchaseOrderResponse CreatePurchaseOrder(PurchaseOrderRequest purchaseOrderRequest)
          {
              return base.Channel.CreatePurchaseOrder(purchaseOrderRequest); // fails here
          }
      }

    But that is not all. I added a trace to the web.config file, and this is the error that appears in the log file:

      System.InvalidOperationException: The service type provided could not be loaded as a service because it does not have a default (parameter-less) constructor. To fix the problem, add a default constructor to the type, or pass an instance of the type to the host.

    So this means that my ServiceHostFactory is never being hit; I even set a breakpoint in both its constructor and its method, and they never get hit. I've added a reference to the StructureMapWcfExtension library from all the other ones (even the console client), one by one, to no avail. I also tried to use the option in the host's web.config file to configure the factory, like so:

      <serviceHostingEnvironment>
        <serviceActivations>
          <add service="Exercise.Services.PurchaseOrderService"
               relativeAddress="PurchaseOrderService.svc"
               factory="Exercise.StructureMapWcfExtension.StructureMapServiceHostFactory"/>
        </serviceActivations>
      </serviceHostingEnvironment>

    That didn't work either. Please, I need help getting this to work so I can incorporate it into our project. Thank you.

    UPDATE: Here's the service host factory's code:

      namespace Exercise.StructureMapWcfExtension
      {
          public class StructureMapServiceHostFactory : ServiceHostFactory
          {
              private readonly Container Container;

              public StructureMapServiceHostFactory()
              {
                  Container = new Container();
                  new ContainerConfigurer().Configure(Container);
              }

              protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
              {
                  return new StructureMapServiceHost(Container, serviceType, baseAddresses);
              }
          }

          public class ContainerConfigurer
          {
              public void Configure(Container container)
              {
                  container.Configure(r => r.For<IPurchaseOrderFacade>().Use<PurchaseOrderFacade>());
              }
          }
      }

    Read the article

  • Windows Server 2008 R2 network adapter stops working, requires hard reboot

    - by Geoff Dalgas
    TL;DR version: Turns out this was a Windows Server 2008 R2 kernel networking bug. After siccing Microsoft support on it, we (eventually) got an unpublished kernel hotfix from Microsoft to address it. If you, too, are experiencing mysterious low-level network driver failures requiring a reboot/bluescreen cycle, you might want that hotfix (or maybe Service Pack 1 whenever it is released, too).

    We have been using HAProxy along with heartbeat from the Linux-HA project. We are using two Linux instances to provide a failover. Each server has its own public IP and a single IP which is shared between the two using a virtual interface (eth1:1) at IP 69.59.196.211. The virtual interface (eth1:1) IP 69.59.196.211 is configured as the gateway for the Windows servers behind them, and we use ip_forwarding to route traffic.

    We are experiencing an occasional network outage on one of our Windows servers behind our Linux gateways. HAProxy will detect the server is offline, which we can verify by remoting to the failed server and attempting to ping the gateway:

      Pinging 69.59.196.211 with 32 bytes of data:
      Reply from 69.59.196.220: Destination host unreachable.

    Running arp -a on this failed server shows that there is no entry for the gateway address (69.59.196.211):

      Interface: 69.59.196.220 --- 0xa
        Internet Address      Physical Address      Type
        69.59.196.161         00-26-88-63-c7-80     dynamic
        69.59.196.210         00-15-5d-0a-3e-0e     dynamic
        69.59.196.212         00-21-5e-4d-45-c9     dynamic
        69.59.196.213         00-15-5d-00-b2-0d     dynamic
        69.59.196.215         00-21-5e-4d-61-1a     dynamic
        69.59.196.217         00-21-5e-4d-2c-e8     dynamic
        69.59.196.219         00-21-5e-4d-38-e5     dynamic
        69.59.196.221         00-15-5d-00-b2-0d     dynamic
        69.59.196.222         00-15-5d-0a-3e-09     dynamic
        69.59.196.223         ff-ff-ff-ff-ff-ff     static
        224.0.0.22            01-00-5e-00-00-16     static
        224.0.0.252           01-00-5e-00-00-fc     static
        225.0.0.1             01-00-5e-00-00-01     static

    On our Linux gateway instances, arp -a shows:

      peak-colo-196-220.peak.org (69.59.196.220) at <incomplete> on eth1
      stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
      peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
      peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
      peak-colo-196-222.peak.org (69.59.196.222) at 00:15:5d:0a:3e:09 [ether] on eth1
      peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
      peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1

    Why would arp occasionally set the entry for this failed server as <incomplete>? Should we be defining our arp entries statically? I've always left arp alone since it works 99% of the time, but in this one instance it appears to be failing. Are there any additional troubleshooting steps we can take to help resolve this issue?

    THINGS WE HAVE TRIED

    I added a static arp entry for testing on one of the Linux gateways, which still didn't help:

      root@haproxy2:~# arp -a
      peak-colo-196-215.peak.org (69.59.196.215) at 00:21:5e:4d:61:1a [ether] on eth1
      peak-colo-196-221.peak.org (69.59.196.221) at 00:15:5d:00:b2:0d [ether] on eth1
      stackoverflow.com (69.59.196.212) at 00:21:5e:4d:45:c9 [ether] on eth1
      peak-colo-196-219.peak.org (69.59.196.219) at 00:21:5e:4d:38:e5 [ether] on eth1
      peak-colo-196-209.peak.org (69.59.196.209) at 00:26:88:63:c7:80 [ether] on eth1
      peak-colo-196-217.peak.org (69.59.196.217) at 00:21:5e:4d:2c:e8 [ether] on eth1
      peak-colo-196-220.peak.org (69.59.196.220) at 00:21:5e:4d:30:8d [ether] PERM on eth1

      root@haproxy2:~# arp -i eth1 -s 69.59.196.220 00:21:5e:4d:30:8d
      root@haproxy2:~# ping 69.59.196.220
      PING 69.59.196.220 (69.59.196.220) 56(84) bytes of data.
      --- 69.59.196.220 ping statistics ---
      7 packets transmitted, 0 received, 100% packet loss, time 6006ms

    Rebooting the Windows web server solves this issue temporarily with no other changes to the network, but our experience shows this issue will come back.

    Swapping network cards and switches: I noticed the link light on the switch port for the failed Windows server was running at 100Mb instead of 1Gb on the failed interface. I moved the cable to several other open ports and the link indicated 100Mb for each port that I tried. I also swapped the cable, with the same result. I tried changing the properties of the network card in Windows, and the server locked up and required a hard reset after clicking apply. This Windows server has two physical network interfaces, so I have swapped the cables and network settings on the two interfaces to see if the problem follows the interface. If the public interface goes down again, we will know that it is not an issue with the network card. (We also tried another switch we have on hand; no change.)

    Changing network hardware driver versions: We've had the same problem with the latest Broadcom driver, as well as the built-in driver that ships with Windows Server 2008 R2.

    Replacing network cables: As a last-ditch effort we remembered another change that occurred: the replacement of all of the patch cords between our servers / switch. We had purchased two sets, one green, of lengths 1ft - 3ft, for the private interfaces, and another set of red cables for the public interfaces. We swapped out all of the public interface patch cables with a different brand and ran our servers without issue for a full week... aaaaaand then the problem recurred.

    Disable checksum offload, remove TProxy: We also tried disabling TCP/IP checksum offload in the driver; no change. We're now pulling out TProxy and moving to a more traditional x-forwarded-for network arrangement without any fancy IP address rewriting. We'll see if that helps.

    Switch virtualization providers: On the off chance this was related to Hyper-V in some way (we do host Linux VMs on it), we switched to VMware Server. No change.

    Switch host model: We've reached the end of our troubleshooting rope and are now formally involving Microsoft support. They recommended changing the host model:

      http://en.wikipedia.org/wiki/Host_model
      http://technet.microsoft.com/en-us/magazine/2007.09.cableguy.aspx

    We did that, and... we'll see.

    Read the article

  • Deploy custom web service to SharePoint server (2007/2010)?

    - by leif
    According to MSDN, to deploy a custom web service we need to create *wsdl.aspx and *disco.aspx files and put them, together with the .asmx file, under the _vti_bin folder (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\isapi), and put the DLL under the bin folder of the root of the SharePoint virtual directory. That works correctly for me. However, I also found that if I put the .asmx file under the root virtual directory, it works as well - without creating those *wsdl.aspx and *disco.aspx files - and is much easier than the approach above. So I'm wondering: what are the potential issues with doing it that way?

    Read the article

  • setting up/installing/configuring nginx LEMP stack on fresh VPS server

    - by grant tailor
    I need some help in setting up, installing and configuring an nginx LEMP stack on a fresh new VPS I have. The specs of the CentOS 5.7 VPS are 2GB DDR3 ECC RAM (4GB burst), 1 core at 1.5GHz (3GHz burst), 100GB RAID 10 storage, and unmetered bandwidth at 100Mbps, all for a whopping $25/month (unbeatable, yeah I know :). Anyway, I have followed this LEMP stack guide on Linode (I will also need MySQL and PHP): http://library.linode.com/lemp-guides/centos-5 - but basically what I want is to be able to host multiple websites on this web server after everything is set up. I am used to the DirectAdmin control panel on another server and want to have things set up so I can host multiple websites... mostly WordPress and Drupal themes. Let's say 10 websites on this nginx web server. So can someone please help me with what I need to do to take "full" advantage of nginx's power and performance, while being able to easily manage these multiple websites (WordPress and Drupal themes)? Thanks.
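    A hedged sketch of the usual pattern (domains, paths, and the php-fpm address are illustrative): give each site its own server block, dropped into its own file so sites can be added or removed independently:

      # /etc/nginx/conf.d/example1.conf
      server {
          listen 80;
          server_name example1.com www.example1.com;
          root /var/www/example1.com/public;
          index index.php index.html;

          location / {
              try_files $uri $uri/ /index.php?$args;   # WordPress/Drupal front controller
          }

          location ~ \.php$ {
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_pass 127.0.0.1:9000;             # php-fpm pool
          }
      }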

    Read the article

  • How to convert an existing callback interface to use boost signals & slots

    - by the_mandrill
    I've currently got a class that can notify a number of other objects via callbacks:

      class Callback
      {
          virtual void NodulesChanged() = 0;
          virtual void TurkiesTwisted() = 0;
      };

      class Notifier
      {
          std::vector<Callback*> m_Callbacks;

          void AddCallback(Callback* cb) { m_Callbacks.push_back(cb); }
          ...
          void ChangeNodules()
          {
              for (iterator it = m_Callbacks.begin(); it != m_Callbacks.end(); it++) {
                  (*it)->NodulesChanged();
              }
          }
      };

    I'm considering changing this to use Boost's signals and slots, as it would be beneficial to reduce the likelihood of dangling pointers when the callee gets deleted, among other things. However, as it stands, Boost's signals seem more oriented towards dealing with function objects. What would be the best way of adapting my code to still use the callback interface but use signals and slots to deal with the connection and notification aspects?
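    A hedged sketch of one adaptation (using boost::signals2 and boost::bind; the class names follow the question, the rest is illustrative): expose one signal per event and let scoped_connection handle disconnection when a listener dies:

      #include <boost/signals2.hpp>
      #include <boost/bind.hpp>

      class Notifier
      {
      public:
          boost::signals2::signal<void ()> NodulesChanged;
          boost::signals2::signal<void ()> TurkiesTwisted;

          void ChangeNodules() { NodulesChanged(); } // fires all connected slots
      };

      class Listener
      {
      public:
          void Attach(Notifier& n)
          {
              // the connection is severed automatically when *this is destroyed
              m_conn = n.NodulesChanged.connect(boost::bind(&Listener::OnNodules, this));
          }
          void OnNodules() { /* react */ }
      private:
          boost::signals2::scoped_connection m_conn;
      };

    Note that scoped_connection only guards against forgetting to disconnect; for full safety against a deleted listener, signals2's slot tracking (connecting with .track() on a shared_ptr-managed listener) is the stronger tool.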

    Read the article

  • Silence output from SimpleXMLRPCServer

    - by Corey Goldberg
    I am running an XML-RPC server using SimpleXMLRPCServer from the stdlib. My code looks something like this:

      import SimpleXMLRPCServer
      import socket

      class RemoteStarter:
          def start(self):
              return 'foo'

      rs = RemoteStarter()
      host = socket.gethostbyaddr(socket.gethostname())[0]
      port = 9000
      server = SimpleXMLRPCServer.SimpleXMLRPCServer((host, port))
      server.register_instance(rs)
      server.serve_forever()

    Every time the 'start' method gets called remotely, the server prints an access line like this:

      <server_name> - - [10/Mar/2010 13:06:20] "POST /RPC2 HTTP/1.0" 200 -

    I can't figure out a way to silence the output so it doesn't print these access lines to stdout. Anyone?
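    A hedged pointer (worth verifying against your Python version): the constructor takes a logRequests flag that is passed down to the underlying request handler, and setting it to False suppresses exactly these access lines:

      server = SimpleXMLRPCServer.SimpleXMLRPCServer((host, port), logRequests=False)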

    Read the article

  • SYN flooding still a threat to servers?

    - by Rob
    Well, recently I've been reading about different denial-of-service methods. One method that kind of stuck out was SYN flooding. I'm a member of some not-so-nice forums, and someone was selling a Python script that would DoS a server using SYN packets with a spoofed IP address. However, if you sent a SYN packet to a server with a spoofed IP address, the target server would return the SYN/ACK packet to the host that was spoofed. In which case, wouldn't the spoofed host return an RST packet, thus negating the 75-second long wait and ultimately failing in its attempt to DoS the server?

    Read the article

  • mongoid with rails - "Database should be a Mongo::DB, not NilClass"

    - by Adam T
    Greetings. I am trying to get Mongoid to work with my Rails app and I am getting an error:

      Mongoid::Errors::InvalidDatabase in 'Shipment bol should be unique'
      Database should be a Mongo::DB, not NilClass

    I have created the mongoid.yml file in my config directory and have mongodb running as a daemon. The config file is like so:

      defaults: &defaults
        host: localhost

      development:
        <<: *defaults
        database: ship-it-development

      test:
        <<: *defaults
        database: ship-it-test

      production:
        <<: *defaults
        host: <%= ENV['MONGOID_HOST'] %>
        port: <%= ENV['MONGOID_PORT'] %>
        database: <%= ENV['MONGOID_DATABASE'] %>

    All of my specs fail with the above error. I am using Rails 2.3.8. Anyone have ideas?
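    A hedged possibility (Mongoid 1.x-era API; the database name is taken from the config above, the rest is illustrative): on Rails 2.3, mongoid.yml is not always loaded automatically, so the database stays nil unless an initializer wires it up, along these lines:

      # config/initializers/mongoid.rb
      Mongoid.configure do |config|
        name = "ship-it-#{RAILS_ENV}"
        config.master = Mongo::Connection.new("localhost", 27017).db(name)
      end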

    Read the article

  • Accurate Timings with Oscilloscopes on PC

    - by Paul Bullough
    In the world of embedded software (firmware) it is fairly common to observe the order of events, take timings and optimise a program by getting it to waggle PIO lines and capturing their behavior on an oscilloscope. In days gone by it was possible to toggle pins on the serial and parallel ports to achieve much the same thing in PC-based software. This made it possible to capture host PC software events and firmware events on the same trace and examine host software/firmware interactions. Now, my new laptop... no serial or parallel ports! This is increasingly the case. So, does anyone have any suggestions as to how to go about emitting accurate timing signals from a "modern" PC? It strikes me that we don't have any immediately programmable, lag-free output pins left. The solution needs to run off a laptop, so add-on cards that only plug into desktops are not permitted.

    Read the article

  • Errors when connecting to HTTPS using Net::HTTP routines (Ruby on Rails)

    - by jaycode
    Hi all, the code below explains the problem in detail:

      # this returns error Net::HTTPBadResponse
      url = URI.parse('https://sitename.com')
      response = Net::HTTP.start(url.host, url.port) { |http| http.get('/remote/register_device') }

      # this works
      url = URI.parse('http://sitename.com')
      response = Net::HTTP.start(url.host, url.port) { |http| http.get('/remote/register_device') }

      # this returns error Net::HTTPBadResponse
      response = Net::HTTP.post_form(URI.parse('https://sitename.com/remote/register_device'), {:text => 'hello world'})

      # this returns error Errno::ECONNRESET (Connection reset by peer)
      response = Net::HTTP.post_form(URI.parse('https://sandbox.itunes.apple.com/verifyReceipt'), {:text => 'hello world'})

      # this works
      response = Net::HTTP.post_form(URI.parse('http://sitename.com/remote/register_device'), {:text => 'hello world'})

    So... how do I send POST parameters to https://sitename.com or https://sandbox.itunes.apple.com/verifyReceipt in this example? Further information: I am trying to get this working in Rails: http://developer.apple.com/iphone/library/documentation/NetworkingInternet/Conceptual/StoreKitGuide/VerifyingStoreReceipts/VerifyingStoreReceipts.html#//apple_ref/doc/uid/TP40008267-CH104-SW1
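    A hedged sketch of the likely fix: Net::HTTP does not switch to TLS just because the URL says https; without use_ssl it speaks plaintext to port 443, which produces exactly these bad-response/reset errors. Something along these lines (Ruby 1.8-era API; the payload is illustrative, as verifyReceipt actually expects JSON):

      require 'net/https'

      url = URI.parse('https://sandbox.itunes.apple.com/verifyReceipt')
      http = Net::HTTP.new(url.host, url.port)
      http.use_ssl = (url.scheme == 'https')
      response = http.post(url.path, '{"receipt-data": "..."}')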

    Read the article

  • What is the typical setup for a laptop used for multi platform development?

    - by iama
    I am planning to get a new laptop for development on both Windows & Linux platforms. On Windows, my development would be primarily .NET/C#/IIS/MS SQL Server. On Linux, preferably Ubuntu, my development would be in Ruby and Python. I am thinking of buying a laptop with Windows 7 pre-installed, 4GB RAM, an Intel Core 2 Duo and a 320 GB HD, and then running 2 VMs for Windows and Linux development respectively, with the host OS as my workstation. Of course, I would be running DBs and web servers on the respective platforms. Is this a typical setup? My only concern is running two VMs side by side; I am not sure that configuration would be optimal. The alternative would be to do my Windows development on the host Windows 7 OS. Any thoughts?

    Read the article

  • How Easy Is It to Hijack Session Vars on GoDaddy (PHP)

    - by yar
    This article states that:

      If your site is run on a shared Web server, be aware that any session variables can easily be viewed by any other users on the same server.

    On a larger host like GoDaddy, are there really no protections in place against this? Could it really be that easy? If it is that easy, where are the session vars of the other users on my host, so I can check them out? Edit: I didn't believe it, but here's my little program which shows that this is true! I wonder if those are really the same as the values stored in the cookies on the users' machines?
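    For context, a hedged sketch of the usual mitigation (the path is illustrative; the directory must exist and be readable only by your account): point PHP's session storage at a private directory before the session starts:

      <?php
      // e.g. in a bootstrap file included before session_start()
      ini_set('session.save_path', '/home/youruser/private_sessions');
      session_start();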

    Read the article

  • "The provider is not compatible with the version of Oracle client"

    - by psyb0rg
    I just put my ASP.NET web service on a remote host. The service accesses an Oracle DB on my local machine. The service worked fine when it was running on localhost, but since moving to the remote host, I get:

      The provider is not compatible with the version of Oracle client
      at Oracle.DataAccess.Client.OracleInit.Initialize()
      at Oracle.DataAccess.Client.OracleConnection..cctor()
      --- End of inner exception stack trace ---
      at Oracle.DataAccess.Client.OracleConnection..ctor(String connectionString)
      at...

    I know the error is related to the data provider version running on the server/client. In my case, the only DLL I referenced in the project was Oracle.DataAccess. So how do I go about solving this? Note that I won't be able to change anything on the web host other than my own project. My local machine is running Oracle 11g. Thanks.

    Read the article

  • Web server parsing Chrome input from POST request

    - by ravenspoint
    I am developing a small embedded web server. I want to add parsing of POST requests, but I am having a problem with password input fields from Chrome. Firefox and IE work perfectly.

    The HTML:

      <form action=start.webem method=post>
        <input value="START" type=submit /><!--#webem start -->
        <p>Password: <input TYPE=PASSWORD name=yourname AUTOCOMPLETE=OFF /><br>
      </form>

    From Firefox I get:

      POST /stop.webem HTTP/1.1
      Host: 127.0.0.1:8080
      User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.9) Gecko/20100315 Firefox/3.5.9 (.NET CLR 3.5.30729)
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: en-us,en;q=0.5
      Accept-Encoding: gzip,deflate
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive: 300
      Connection: keep-alive
      Referer: http://127.0.0.1:8080/
      Content-Type: application/x-www-form-urlencoded
      Content-Length: 13

      yourname=test

    However, from Chrome, about 90% of the time, the yourname=test is missing:

      POST /start.webem HTTP/1.1
      Host: 127.0.0.1:8080
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5
      Referer: http://127.0.0.1:8080/
      Content-Length: 13
      Cache-Control: max-age=0
      Origin: http://127.0.0.1:8080
      Content-Type: application/x-www-form-urlencoded
      Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    Though, occasionally it does work:

      POST /start.webem HTTP/1.1
      Host: 127.0.0.1:8080
      Connection: keep-alive
      User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1045 Safari/532.5
      Referer: http://127.0.0.1:8080/start.webem
      Content-Length: 13
      Cache-Control: max-age=0
      Origin: http://127.0.0.1:8080
      Content-Type: application/x-www-form-urlencoded
      Accept: application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
      Accept-Encoding: gzip,deflate,sdch
      Accept-Language: en-US,en;q=0.8
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

      yourname=test

    I cannot find what causes it to work sometimes.
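    A hedged guess at the cause (a sketch in C; not from the original post): Chrome frequently sends the header block and the body in separate TCP segments, so a server that parses only what a single recv() returns will "lose" the body most of the time, while Firefox's single-segment sends happen to work. The usual fix is to keep reading until Content-Length bytes of body have arrived:

      /* Illustrative helper: keep reading until the full POST body is buffered.
         'sock' is a connected socket; 'have' is how many body bytes were already
         received along with the headers. (recv: <sys/socket.h> on POSIX,
         <winsock2.h> on Windows.) */
      static int read_body(int sock, char *buf, int cap, int have, int content_length)
      {
          while (have < content_length && have < cap) {
              int n = recv(sock, buf + have, cap - have, 0);
              if (n <= 0) return -1;   /* peer closed the connection, or error */
              have += n;
          }
          return have;                 /* total body bytes now in buf */
      }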

    Read the article
