Search Results

Search found 16822 results on 673 pages for 'custom protocol'.


  • Svcutil generating bad config with multiple endpoints

    - by vfilby
    I have a WCF service that exposes a SOAP and an XML endpoint. When I use svcutil to generate the proxy code on the client side, the generated configuration contains two endpoints, which causes the client to fail. If I edit the web.config file and remove the second endpoint (the one with the custom binding), all works as expected. Is there a way I can get svcutil to generate a config that just works, so that I don't need to hand-edit the file every time?

    Client-side error:

        An endpoint configuration section for contract 'MyNamespace.ITestService' could not be loaded because more than one endpoint configuration for that contract was found. Please indicate the preferred endpoint configuration section by name.

    Svcutil command:

        svcutil http://api.local/Test.svc /reference:bin\MyNamespace.Interface.dll /config:web.config /mergeConfig /out:"Service References\TestService.cs" /n:*,MyNamespace

    Generated client config:

        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="BasicHttpBinding_ITestService" closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false"
                       hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288"
                       maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                       useDefaultWebProxy="true">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                <security mode="None">
                  <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
                  <message clientCredentialType="UserName" algorithmSuite="Default" />
                </security>
              </binding>
            </basicHttpBinding>
            <customBinding>
              <binding name="CustomBinding_ITestService">
                <textMessageEncoding maxReadPoolSize="64" maxWritePoolSize="16" messageVersion="Soap12" writeEncoding="utf-8">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                </textMessageEncoding>
              </binding>
            </customBinding>
          </bindings>
          <client>
            <endpoint address="http://api2.local/Test.svc/soap" binding="basicHttpBinding"
                      bindingConfiguration="BasicHttpBinding_ITestService" contract="MyNamespace.ITestService"
                      name="BasicHttpBinding_ITestService" />
            <endpoint binding="customBinding" bindingConfiguration="CustomBinding_ITestService"
                      contract="MyNamespace.ITestService" name="CustomBinding_ITestService" />
          </client>
        </system.serviceModel>
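    Until svcutil cooperates, one workaround that avoids hand-editing the config is to name the preferred endpoint configuration when constructing the proxy, which is exactly what the error message asks for. A minimal sketch, assuming the generated proxy class is called TestServiceClient and SomeOperation stands in for a real contract method:

        // Hypothetical client code: passing the endpoint configuration name resolves the
        // ambiguity between the two <endpoint> entries without deleting either of them.
        var client = new TestServiceClient("BasicHttpBinding_ITestService");
        try
        {
            var result = client.SomeOperation();
            client.Close();
        }
        catch
        {
            client.Abort();
            throw;
        }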

    Read the article

  • Trouble connecting to vsftpd on ubuntu server

    - by littleK
    I have installed Ubuntu Server 10.10 and I am using it to host a domain that I have. I am trying to set up FTP for the server, but I am running into some problems. I have successfully installed vsftpd and I have opened up ports 20 and 21 on my firewall. In my vsftpd configuration, I have enabled SSL. Every time I try to connect to my server via FTP, I receive a "Connection Refused" error. I have had a little more success with SSL disabled; however, the connection process times out after the LIST command (but it does accept my authentication). Here is my vsftpd configuration, with the SSL settings at the bottom:

        # Example config file /etc/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file loosens things up a bit, to make the ftp daemon more usable. Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options. Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's capabilities.
        #
        # Run standalone? vsftpd can run either from an inetd or as a standalone daemon started from an initscript.
        listen=YES
        #
        # Run standalone with IPv6? Like the listen parameter, except vsftpd will listen on an IPv6 socket instead of an IPv4 one. This parameter and the listen parameter are mutually exclusive.
        #listen_ipv6=YES
        #
        # Allow anonymous FTP? (Disabled by default)
        anonymous_enable=NO
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022, if your users expect that (022 is used by most other ftpd's)
        #local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only has an effect if the above global write enable is activated. Also, you will obviously need to create a directory writable by the FTP user.
        #anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create new directories.
        #anon_mkdir_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they go into a certain directory.
        dirmessage_enable=YES
        #
        # If enabled, vsftpd will display directory listings with the time in your local time zone. The default is to display GMT. The times returned by the MDTM FTP command are also affected by this option.
        use_localtime=YES
        #
        # Activate logging of uploads/downloads.
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by a different user. Note! Using "root" for uploaded files is not recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # You may override where the log file goes if you like. The default is shown below.
        #xferlog_file=/var/log/vsftpd.log
        #
        # If you want, you can have your log file in standard ftpd xferlog format. Note that the default log file location is /var/log/xferlog in this case.
        #xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not recommended for security (the code is non-trivial). Not enabling it, however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore the request. Turn on the below options to have the server actually do ASCII mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd predicted this attack and has always been safe, reporting the size of the raw file.
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        #ftpd_banner=Welcome to blah FTP service.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd.banned_emails
        #
        # You may restrict local users to their home directories. See the FAQ for the possible risks in this before using chroot_local_user or chroot_list_enable below.
        #chroot_local_user=YES
        #
        # You may specify an explicit list of local users to chroot() to their home directory. If chroot_local_user is YES, then this list becomes a list of users to NOT chroot().
        #chroot_local_user=YES
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd.chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by default to avoid remote users being able to cause excessive I/O on large sites. However, some broken FTP clients such as "ncftp" and "mirror" assume the presence of the "-R" option, so there is a strong case for enabling it.
        #ls_recurse_enable=YES
        #
        # Debian customization
        #
        # Some of vsftpd's settings don't fit the Debian filesystem layout by default. These settings are more Debian-friendly.
        #
        # This option should be the name of a directory which is empty. Also, the directory should not be writable by the ftp user. This directory is used as a secure chroot() jail at times vsftpd does not require filesystem access.
        secure_chroot_dir=/var/run/vsftpd/empty
        #
        # This string is the name of the PAM service vsftpd will use.
        pam_service_name=vsftpd
        #
        # This option specifies the location of the RSA certificate to use for SSL encrypted connections.
        rsa_cert_file=/etc/ssl/private/vsftpd.pem

        # SSL
        ssl_enable=YES
        allow_anon_ssl=NO
        force_local_data_ssl=YES
        force_local_logins_ssl=YES
        ssl_tlsv1=YES
        ssl_sslv2=YES
        ssl_sslv3=YES

    Thanks!
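    For what it's worth, nothing in this configuration pins down the passive-mode data ports, and authenticating fine but timing out on LIST is the classic sign of the data connection (not the control connection) being blocked. A sketch of the directives usually added for that, offered as an assumption rather than a confirmed diagnosis; the port range is arbitrary and would also have to be opened on the firewall:

        # Hypothetical addition to /etc/vsftpd.conf: pin passive mode to a known port range
        pasv_enable=YES
        pasv_min_port=10000
        pasv_max_port=10100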

    Read the article

  • Rails 3 and Bootstrap 2.1.0 - can't fix my footer

    - by ExiRe
    I have a Rails application with Bootstrap 2.1.0 (I use the twitter-bootstrap-rails gem for that), but I can't get the footer working: it is not visible unless I scroll down the page, and I can't figure out how to fix that.

    application.html.haml:

        !!!
        %html
          %head
            %title MyApp
            = stylesheet_link_tag "application", :media => "all"
            = javascript_include_tag "application"
            = csrf_meta_tags
            %meta{ :name => "viewport", :content => "width=device-width, initial-scale=1.0" }
          %body
            %div{ :class => "wrapper" }
              = render 'layouts/navbar_template'
              %div{ :class => "container-fluid" }
                - flash.each do |key, value|
                  = content_tag( :div, value, :class => "alert alert-#{key}" )
                %div{ :class => "row-fluid" }
                  %div{:class => "span10"}
                    = yield
                  %div{:class => "span2"}
                    %h2 Test sidebar
            %footer{ :class => "footer" }
              = debug(params) if Rails.env.development?

    bootstrap_and_overrides.css.less:

        @import "twitter/bootstrap/bootstrap";

        body { padding-top: 60px; }

        @import "twitter/bootstrap/responsive";

        // Set the correct sprite paths
        @iconSpritePath: asset-path('twitter/bootstrap/glyphicons-halflings.png');
        @iconWhiteSpritePath: asset-path('twitter/bootstrap/glyphicons-halflings-white.png');

        // Set the Font Awesome (Font Awesome is default. You can disable by commenting below lines)
        // Note: If you use asset_path() here, your compiled boostrap_and_overrides.css will not
        // have the proper paths. So for now we use the absolute path.
        @fontAwesomeEotPath: '/assets/fontawesome-webfont.eot';
        @fontAwesomeWoffPath: '/assets/fontawesome-webfont.woff';
        @fontAwesomeTtfPath: '/assets/fontawesome-webfont.ttf';
        @fontAwesomeSvgPath: '/assets/fontawesome-webfont.svg';

        // Font Awesome
        @import "fontawesome";

        // Your custom LESS stylesheets goes here
        //
        // Since bootstrap was imported above you have access to its mixins which
        // you may use and inherit here
        //
        // If you'd like to override bootstrap's own variables, you can do so here as well
        // See http://twitter.github.com/bootstrap/less.html for their names and documentation
        //
        // Example:
        // @linkColor: #ff0000;

        // MY CSS IS HERE.
        html, body {
          height: 100%;
        }

        footer {
          color: #666;
          background: #F5F5F5;
          padding: 17px 0 18px 0;
          border-top: 1px solid #000;
        }

        footer a { color: #999; }
        footer a:hover { color: #efefef; }

        .wrapper {
          min-height: 100%;
          height: auto !important;
          height: 10px;
          margin-bottom: -10px;
        }

    Read the article

  • UIScrollView ImageView with pins on top

    - by Koppo
    To all, I have a UIScrollView which has a UIImageView, and I want to show pins on this image view. When I add pins as subviews of the image view everything is great, except that when you zoom, the scale transform is applied to the pins as well. I don't want this behavior; I want my pins to stay the same size. So I chose to add the pins to another view which sits on top of the image view and is also a subview of the UIScrollView. The idea, if you will imagine it, is to have a layer which hovers over the map, won't scale, yet shows pins wherever I plot them. Pins added to that layer don't scale when the image view scales. However, the issue then becomes that the position of the pins no longer matches the original x/y origin, because the image view has had a scale transform applied. Basically this is a custom map of a place with pins. I am trying to have the pins float over my image view without zooming in and out, yet remember where I placed them when the zoom happens. Some code:

        scrollView = [[UIScrollView alloc] initWithFrame:viewRect];
        scrollView.delegate = self;
        scrollView.pagingEnabled = NO;
        scrollView.scrollsToTop = NO;
        [scrollView setBackgroundColor:[UIColor clearColor]];
        scrollView.clipsToBounds = YES; // default is NO, we want to restrict drawing within our scrollview
        scrollView.bounces = YES;
        scrollView.autoresizingMask = UIViewAutoresizingFlexibleHeight;
        scrollView.indicatorStyle = UIScrollViewIndicatorStyleWhite;

        imageViewMap = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image.png"]];
        imageViewMap.userInteractionEnabled = YES;
        viewRect = CGRectMake(0,0,imageViewMap.image.size.width,imageViewMap.image.size.height);
        //viewRect = CGRectMake(0,0,2976,3928);
        [scrollView addSubview:imageViewMap];
        [scrollView setContentSize:CGSizeMake(viewRect.size.width, viewRect.size.height)];

        iconsView = [[UIView alloc] initWithFrame:imageViewMap.frame];
        [scrollView addSubview:iconsView];

    Code to add a pin later, on some event:

        [iconsView addSubview:pinIcon];

    I am stuck trying to figure out how to get my pins to hover on the map without moving when the scale happens. Thanks
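    One common approach, sketched below under assumptions that go beyond the question: each pin remembers where it was dropped in unscaled image coordinates (a hypothetical mapPosition property on a hypothetical PinView class), and the scroll view delegate repositions, but never scales, the pins whenever the zoom changes. This assumes iconsView itself is not the view returned from viewForZoomingInScrollView:, so it never receives the zoom transform.

        // Sketch: keep pins at a fixed size and move them to match the current zoom.
        - (void)scrollViewDidZoom:(UIScrollView *)aScrollView {
            CGFloat scale = aScrollView.zoomScale;
            for (PinView *pin in [iconsView subviews]) {
                CGPoint p = pin.mapPosition; // stored in unscaled image coordinates when the pin was added
                pin.center = CGPointMake(p.x * scale, p.y * scale);
            }
        }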

    Read the article

  • Eclipse: How to convert a web project into an AspectJ project and weave and run it using the AJDT plugin

    - by Kent
    What I want to do: I want to use the @Configured annotation with Spring. It requires AspectJ to be enabled. I thought that using the AJDT plugin for compile-time weaving would solve this problem. Before installing the plugin, the dependencies which were supposed to be injected into my @Configured object remained null.

    What I have done: Installed the AJDT (AspectJ Development Tools) plugin for Eclipse 3.4. Right-clicked on my web project and converted it into an AspectJ project. Enabled compile-time weaving.

    What doesn't work: When I start the Tomcat 6 server now, I get an exception (see below).

    Other information: I haven't configured anything in the AspectJ Build and AspectJ Compiler parts of the project properties. JDT Weaving under Preferences says weaving is enabled. I still have Java Build Path and Java Compiler under project properties, and they look like I previously configured them (while the above two new entries are not configured). The icon of my @Configured object file looks like any other file (i.e. no indication of any aspect or such, which I think there should be). The file name is MailNotification.java (and not .aj), but I guess it should still work, as I'm using a Spring annotation for AspectJ? I haven't found any tutorial or similar which teaches how to turn a Spring web application project into an AspectJ project and weave aspects into the files using the AJDT plugin, all within Eclipse 3.4. If there is anything like that out there I would be very interested in knowing about it.

    What I would like to know: Where to go from here? I just want to use the @Configured annotation of Spring. I'm also using @Transactional, which I think also needs AspectJ. If possible I would like to study AspectJ as little as possible, as long as my needs are met. The subject seems interesting, but huge; all I want to do is use the two Spring annotations mentioned above.

    Exception when Tomcat 6 is started:

        Caused by: java.lang.IllegalStateException: ClassLoader [org.apache.catalina.loader.WebappClassLoader] does NOT provide an 'addTransformer(ClassFileTransformer)' method.
        Specify a custom LoadTimeWeaver or start your Java virtual machine with Spring's agent: -javaagent:spring-agent.jar
            at org.springframework.context.weaving.DefaultContextLoadTimeWeaver.setBeanClassLoader(DefaultContextLoadTimeWeaver.java:82)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1322)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:473)
            ... 41 more
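    Reading the stack trace literally, the Spring context is configured for load-time weaving (it is trying to install a ClassFileTransformer), which Tomcat's default class loader cannot do. If compile-time weaving through AJDT is the goal, the load-time-weaver configuration could be removed from the Spring XML; otherwise the JVM needs Spring's agent, as the message itself says. A sketch of the latter, where the jar path is an assumption about your setup:

        # Hypothetical Tomcat startup setting (e.g. in setenv.sh or the launch configuration)
        CATALINA_OPTS="$CATALINA_OPTS -javaagent:/path/to/spring-agent.jar"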

    Read the article

  • How to create a Uri instance parsed with GenericUriParserOptions.DontCompressPath

    - by Andrew Arnott
    When the .NET System.Uri class parses strings, it performs some normalization on the input, such as lower-casing the scheme and hostname. It also trims trailing periods from each path segment. This latter feature is fatal to OpenID applications, because some OpenIDs (like those issued from Yahoo) include base64-encoded path segments which may end with a period. How can I disable this period-trimming behavior of the Uri class?

    Registering my own scheme using UriParser.Register, with a parser initialized with GenericUriParserOptions.DontCompressPath, avoids the period trimming and some other operations that are also undesirable for OpenID. But I cannot register a new parser for existing schemes like HTTP and HTTPS, which I must do for OpenIDs. Another approach I tried was registering my own new scheme, and programming the custom parser to change the scheme back to the standard HTTP(S) schemes as part of parsing:

        public class MyUriParser : GenericUriParser
        {
            private string actualScheme;

            public MyUriParser(string actualScheme)
                : base(GenericUriParserOptions.DontCompressPath)
            {
                this.actualScheme = actualScheme.ToLowerInvariant();
            }

            protected override string GetComponents(Uri uri, UriComponents components, UriFormat format)
            {
                string result = base.GetComponents(uri, components, format);

                // Substitute our actual desired scheme in the string if it's in there.
                if ((components & UriComponents.Scheme) != 0)
                {
                    string registeredScheme = base.GetComponents(uri, UriComponents.Scheme, format);
                    result = this.actualScheme + result.Substring(registeredScheme.Length);
                }

                return result;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                UriParser.Register(new MyUriParser("http"), "httpx", 80);
                UriParser.Register(new MyUriParser("https"), "httpsx", 443);

                Uri z = new Uri("httpsx://me.yahoo.com/b./c.#adf");
                var req = (HttpWebRequest)WebRequest.Create(z);
                req.GetResponse();
            }
        }

    This actually almost works. The Uri instance reports https instead of httpsx everywhere -- except the Uri.Scheme property itself. That's a problem when you pass this Uri instance to HttpWebRequest to send a request to this address: apparently it checks the Scheme property, doesn't recognize it as 'https', and just sends plaintext to port 443 instead of using SSL. I'm happy with any solution that:

    - Preserves trailing periods in path segments in Uri.Path
    - Includes these periods in outgoing HTTP requests
    - Ideally works under ASP.NET medium trust (but not absolutely necessary)

    Read the article

  • Please Explain Drupal schema and drupal_write_record

    - by Aaron
    Hi. A few questions. 1) Where is the best place to populate a new database table when a module is first installed or enabled? I need to go and get some data from an external source and want to do it transparently when the user installs/enables my custom module. I create the schema in {mymodule}_schema() and call drupal_install_schema({tablename}); in hook_install. Then I try to populate the table in hook_enable using drupal_write_record. I confirmed the table was created, and I get no errors when hook_enable executes, but when I query the new table, I get no rows back -- it's empty. Here's one variation of the code I've tried:

        /**
         * Implementation of hook_schema()
         */
        function ncbi_subsites_schema() {
          // we know it's MYSQL, so no need to check
          $schema['ncbi_subsites_sites'] = array(
            'description' => 'The base table for subsites',
            'fields' => array(
              'site_id' => array(
                'description' => 'Primary id for site',
                'type' => 'serial',
                'unsigned' => TRUE,
                'not null' => TRUE,
              ), // end site_id
              'title' => array(
                'description' => 'The title of the subsite',
                'type' => 'varchar',
                'length' => 255,
                'not null' => TRUE,
                'default' => '',
              ), // end title field
              'url' => array(
                'description' => 'The URL of the subsite in Production',
                'type' => 'varchar',
                'length' => 255,
                'default' => '',
              ), // end url field
            ), // end fields
            'unique keys' => array(
              'site_id' => array('site_id'),
              'title' => array('title'),
            ), // end unique keys
            'primary_key' => array('site_id'),
          ); // end schema
          return $schema;
        }

    Here's hook_install:

        function ncbi_subsites_install() {
          drupal_install_schema('ncbi_subsites');
        }

    Here's hook_enable:

        function ncbi_subsites_enable() {
          drupal_get_schema('ncbi_subsites_site');

          // my helper function to get data for table (not shown)
          $subsites = ncbi_subsites_get_subsites();

          foreach( $subsites as $name => $attrs ) {
            $record = new stdClass();
            $record->title = $name;
            $record->url = $attrs['homepage'];
            drupal_write_record( 'ncbi_subsites_sites', $record );
          }
        }

    Can someone tell me what I'm missing?
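    Two details in the snippet are worth checking before anything else; this is a sketch of the usual suspects rather than a guaranteed fix. Drupal's Schema API spells the key 'primary key' (with a space), so 'primary_key' is silently ignored, and the drupal_get_schema() call refers to 'ncbi_subsites_site' while the table is named 'ncbi_subsites_sites':

        // In the schema definition:
        'primary key' => array('site_id'),

        // In hook_enable, fetch the schema for the exact table name (note the trailing "s"):
        drupal_get_schema('ncbi_subsites_sites');

    When drupal_write_record() cannot find the table in the schema, it can return FALSE without raising an error, which would match the empty table being observed here.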

    Read the article

  • PHP/Java Bridge java.lang.NoSuchMethodException

    - by m1sk
    I have set up the PHP/Java Bridge, and the working examples run in my NetBeans Tomcat directory. What doesn't work is using a custom JAR. Here is my code:

        package com.micha;

        public class Hello1Bean {
            public Hello1Bean() {}
            String hi() { return "This is my hello message"; }
            String hello(String name) { return "Hello" + name; }
        }

    And the PHP code:

        <!DOCTYPE html>
        <html>
          <head>
            <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
            <title></title>
          </head>
          <body>
            <?php
            require_once ("java/Java.inc");
            $world = new Java("com.micha.Hello1Bean");
            echo java_values($world->hi());
            echo "Hello Working Thingy\n\n";
            ?>
          </body>
        </html>

    When I check http://localhost:8084/JavaBridge/mytest.php:

        javax.servlet.ServletException: java.lang.RuntimeException: PHP Fatal error:
        Uncaught [[o:Exception]:"java.lang.Exception: Invoke failed: [[o:Hello1Bean]]->hi.
        Cause: java.lang.NoSuchMethodException: hi(). Candidates: [] VM: 1.6.0_25@http://java.sun.com/" at:
        #-6 php.java.bridge.JavaBridge.checkM(JavaBridge.java:1085)
        #-5 php.java.bridge.JavaBridge.Invoke(JavaBridge.java:1024)
        #-4 php.java.bridge.Request.handleRequest(Request.java:417)
        #-3 php.java.bridge.Request.handleRequests(Request.java:500)
        #-2 php.java.bridge.http.ContextRunner.run(ContextRunner.java:145)
        #-1 php.java.bridge.ThreadPool$Delegate.run(ThreadPool.java:60)
        #0 C:\Users\Micha\.netbeans\7.1.2\apache-tomcat-7.0.22.0_base\webapps\JavaBridge\java\Java.inc(232): java_ThrowExceptionProxyFactory->getProxy(2, 'com.micha.Hello...', 'T', true)
        #1 C:\Users\Micha\.netbeans\7.1.2\apache-tomcat-7.0.22.0_base\webapps\JavaBridge\java\Java.inc(360): java_Arg->getResult(true)
        #2 C:\Users\Micha\.netbeans\7.1.2\apache-tomcat-7.0.22.0_base\webapps\JavaBridge\java\Java.inc(366): java_Client->getWrappedResult(true)
        #3 C:\Users\Micha\.netbean in C:\Users\Micha\.netbeans\7.1.2\apache-tomcat-7.0.22.0_base\webapps\JavaBridge\java\Java.inc on line 195
            php.java.servlet.fastcgi.FastCGIServlet.handle(FastCGIServlet.java:499)
            php.java.servlet.fastcgi.FastCGIServlet.doGet(FastCGIServlet.java:521)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:621)
            javax.servlet.http.HttpServlet.service(HttpServlet.java:722)
            org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:393)
            php.java.servlet.PhpCGIFilter.doFilter(PhpCGIFilter.java:126)

    Before this I had a ClassNotFoundException, so it knows there is a class, but for some odd reason I can't call the function. (If I replace "com.micha.Hello1Bean" with some class that doesn't exist, then I get that exception.)
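    The "NoSuchMethodException: hi(). Candidates: []" line is the telling part: hi() and hello() have no access modifier, so they are package-private, and a reflective lookup of the class's public methods would come back empty -- which would explain the empty candidate list. A sketch of the usual first fix, declaring the methods public:

        package com.micha;

        public class Hello1Bean {
            public Hello1Bean() {}

            public String hi() { return "This is my hello message"; }

            public String hello(String name) { return "Hello" + name; }
        }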

    Read the article

  • Cannot install Apache Web Server on Ubuntu, Amazon WS

    - by Eugene Retunsky
    I enter the command apt-get install apache2 --fix-missing (as the root user) and this is what I receive:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert
        Suggested packages:
          apache2-doc apache2-suexec apache2-suexec-custom openssl-blacklist
        The following NEW packages will be installed:
          apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert
        0 upgraded, 10 newly installed, 0 to remove and 36 not upgraded.
        Need to get 2,945 kB/3,141 kB of archives.
        After this operation, 10.4 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Err http://us-west-1.ec2.archive.ubuntu.com/ubuntu/ oneiric-updates/main apache2.2-bin i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 10.161.51.124 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-bin i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-utils i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-common i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-mpm-worker i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2 i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-bin_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-utils_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-common_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-mpm-worker_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Unable to correct missing packages.
        E: Aborting install.

    Any help is appreciated.
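    Those 404s usually mean the local package index is stale: the mirror has replaced 2.2.20-1ubuntu1.1 with a newer release, so the URLs apt computed from the old lists no longer exist. A sketch of the usual first remedy, cheap to try though not guaranteed:

        # Refresh the package lists, then retry the install (already running as root here)
        apt-get update
        apt-get install apache2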

    Read the article

  • Can't Get Virtual Users Setup in VSFTPD -Tried Everything

    - by N.T.
    Have Ubuntu 11.10 with vsftpd installed and working, but I cannot get virtual users set up at all. vsftpd will allow the main Ubuntu owner account to log in, but nothing else. I've followed several tutorials on adding virtual users, but nothing works. I just need to add two virtual users and have them be able to upload files to the vsftpd Ubuntu computer from other computers on my LAN network. Everywhere I've looked, people just point toward tutorials on adding virtual users, but that just is NOT working. I've been struggling with this for over a week now! PLEASE help. Thanks. I'll even give a donation if someone can figure this out. Here is the vsftpd.conf file I am using. I copied the original, and make a new one, every time I try a tutorial. So far, none have worked. Here is the vsftpd.conf file I'm using. (I hope this helps?)

        # Example config file /etc/vsftpd.conf
        #
        # The default compiled in settings are fairly paranoid. This sample file loosens things up a bit, to make the ftp daemon more usable. Please see vsftpd.conf.5 for all compiled in defaults.
        #
        # READ THIS: This example file is NOT an exhaustive list of vsftpd options. Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's capabilities.
        #
        # Run standalone? vsftpd can run either from an inetd or as a standalone daemon started from an initscript.
        listen=YES
        #
        # Run standalone with IPv6? Like the listen parameter, except vsftpd will listen on an IPv6 socket instead of an IPv4 one. This parameter and the listen parameter are mutually exclusive.
        #listen_ipv6=YES
        #
        # Allow anonymous FTP? (Disabled by default)
        anonymous_enable=YES
        #
        # Uncomment this to allow local users to log in.
        local_enable=YES
        #
        # Uncomment this to enable any form of FTP write command.
        write_enable=YES
        #
        # Default umask for local users is 077. You may wish to change this to 022, if your users expect that (022 is used by most other ftpd's)
        local_umask=022
        #
        # Uncomment this to allow the anonymous FTP user to upload files. This only has an effect if the above global write enable is activated. Also, you will obviously need to create a directory writable by the FTP user.
        #anon_upload_enable=YES
        #
        # Uncomment this if you want the anonymous FTP user to be able to create new directories.
        anon_mkdir_write_enable=YES
        #
        # Activate directory messages - messages given to remote users when they go into a certain directory.
        dirmessage_enable=YES
        #
        # If enabled, vsftpd will display directory listings with the time in your local time zone. The default is to display GMT. The times returned by the MDTM FTP command are also affected by this option.
        use_localtime=YES
        #
        # Activate logging of uploads/downloads.
        xferlog_enable=YES
        #
        # Make sure PORT transfer connections originate from port 20 (ftp-data).
        connect_from_port_20=YES
        #
        # If you want, you can arrange for uploaded anonymous files to be owned by a different user. Note! Using "root" for uploaded files is not recommended!
        #chown_uploads=YES
        #chown_username=whoever
        #
        # You may override where the log file goes if you like. The default is shown below.
        #xferlog_file=/var/log/vsftpd.log
        #
        # If you want, you can have your log file in standard ftpd xferlog format. Note that the default log file location is /var/log/xferlog in this case.
        xferlog_std_format=YES
        #
        # You may change the default value for timing out an idle session.
        #idle_session_timeout=600
        #
        # You may change the default value for timing out a data connection.
        #data_connection_timeout=120
        #
        # It is recommended that you define on your system a unique user which the ftp server can use as a totally isolated and unprivileged user.
        #nopriv_user=ftpsecure
        #
        # Enable this and the server will recognise asynchronous ABOR requests. Not recommended for security (the code is non-trivial). Not enabling it, however, may confuse older FTP clients.
        #async_abor_enable=YES
        #
        # By default the server will pretend to allow ASCII mode but in fact ignore the request. Turn on the below options to have the server actually do ASCII mangling on files when in ASCII mode.
        # Beware that on some FTP servers, ASCII support allows a denial of service attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd predicted this attack and has always been safe, reporting the size of the raw file.
        # ASCII mangling is a horrible feature of the protocol.
        #ascii_upload_enable=YES
        #ascii_download_enable=YES
        #
        # You may fully customise the login banner string:
        ftpd_banner=Welcome to Sage FTP service.
        #
        # You may specify a file of disallowed anonymous e-mail addresses. Apparently useful for combatting certain DoS attacks.
        #deny_email_enable=YES
        # (default follows)
        #banned_email_file=/etc/vsftpd.banned_emails
        #
        # You may restrict local users to their home directories. See the FAQ for the possible risks in this before using chroot_local_user or chroot_list_enable below.
        chroot_local_user=YES
        #
        # You may specify an explicit list of local users to chroot() to their home directory. If chroot_local_user is YES, then this list becomes a list of users to NOT chroot().
        #chroot_local_user=YES
        #chroot_list_enable=YES
        # (default follows)
        #chroot_list_file=/etc/vsftpd.chroot_list
        #
        # You may activate the "-R" option to the builtin ls. This is disabled by default to avoid remote users being able to cause excessive I/O on large sites. However, some broken FTP clients such as "ncftp" and "mirror" assume the presence of the "-R" option, so there is a strong case for enabling it.
        #ls_recurse_enable=YES
        #
        # Debian customization
        #
        # Some of vsftpd's settings don't fit the Debian filesystem layout by default. These settings are more Debian-friendly.
        #
        # This option should be the name of a directory which is empty. Also, the directory should not be writable by the ftp user. This directory is used as a secure chroot() jail at times vsftpd does not require filesystem access.
        secure_chroot_dir=/var/run/vsftpd/empty
        #
        # This string is the name of the PAM service vsftpd will use.
        pam_service_name=vsftpd
        local_root=/media/FilesDrive
        #
        # This option specifies the location of the RSA certificate to use for SSL encrypted connections.
        rsa_cert_file=/etc/ssl/private/vsftpd.pem

    Read the article

  • python Socket.IO client for sending broadcast messages to TornadIO2 server

    - by Alp
    I am building a realtime web application. I want to be able to send broadcast messages from the server-side implementation of my Python application. Here is the setup:

    - socketio.js on the client side
    - TornadIO2 server as the Socket.IO server
    - Python on the server side (Django framework)

    I can successfully send socket.io messages from the client to the server. The server handles these and can send a response. In the following I will describe how I did that.

    Current Setup and Code

    First, we need to define a Connection which handles socket.io events:

        class BaseConnection(tornadio2.SocketConnection):
            def on_message(self, message):
                pass

            # will be run if client uses socket.emit('connect', username)
            @event
            def connect(self, username):
                # send answer to client which will be handled by socket.on('log', function)
                self.emit('log', 'hello ' + username)

    Starting the server is done by a custom Django management command:

        class Command(BaseCommand):
            args = ''
            help = 'Starts the TornadIO2 server for handling socket.io connections'

            def handle(self, *args, **kwargs):
                autoreload.main(self.run, args, kwargs)

            def run(self, *args, **kwargs):
                port = settings.SOCKETIO_PORT
                router = tornadio2.TornadioRouter(BaseConnection)
                application = tornado.web.Application(
                    router.urls,
                    socket_io_port = port
                )
                print 'Starting socket.io server on port %s' % port
                server = SocketServer(application)

    Very well, the server runs now. Let's add the client code:

        <script type="text/javascript">
            var sio = io.connect('localhost:9000');
            sio.on('connect', function(data) {
                console.log('connected');
                sio.emit('connect', '{{ user.username }}');
            });
            sio.on('log', function(data) {
                console.log("log: " + data);
            });
        </script>

    Obviously, {{ user.username }} will be replaced by the username of the currently logged-in user; in this example the username is "alp". Now, every time the page gets refreshed, the console output is:

        connected
        log: hello alp

    Therefore, invoking messages and sending responses works. But now comes the tricky part.

    Problems

    The response "hello alp" is sent only to the invoker of the socket.io message. I want to broadcast a message to all connected clients, so that they can be informed in realtime if a new user joins the party (for example in a chat application). So, here are my questions:

    - How can I send a broadcast message to all connected clients?
    - How can I send a broadcast message to multiple connected clients that are subscribed on a specific channel?
    - How can I send a broadcast message anywhere in my Python code (outside of the BaseConnection class)? Would this require some sort of Socket.IO client for Python, or is this built in with TornadIO2?

    All these broadcasts should be done in a reliable way, so I guess websockets are the best choice. But I am open to all good solutions.
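    A sketch of the pattern commonly used with TornadIO2 for the first and third questions, under the assumption that all clients connect through BaseConnection: keep every live connection in a class-level set and loop over it whenever something needs to go to everyone. The broadcast helper can then be called from any server-side code that can import the class.

        class BaseConnection(tornadio2.SocketConnection):
            participants = set()   # shared across all connections of this class

            def on_open(self, request):
                self.participants.add(self)

            def on_close(self):
                self.participants.discard(self)

            @classmethod
            def broadcast(cls, event, message):
                # emit the same event to every currently connected client
                for connection in cls.participants:
                    connection.emit(event, message)

        # e.g. from the connect handler above, or elsewhere on the server:
        # BaseConnection.broadcast('log', 'a new user joined')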

    Read the article

  • HTTP 400 Bad Request error attempting to add web reference to WCF Service

    - by c152driver
    I have been trying to port a legacy WSE 3 web service to WCF. Since maintaining backwards compatibility with WSE 3 clients is the goal, I've followed the guidance in this article. After much trial and error, I can call the WCF service from my WSE 3 client. However, I am unable to add or update a web reference to this service from Visual Studio 2005 (with WSE 3 installed). The response is "The request failed with HTTP status 400: Bad Request". I get the same error trying to generate the proxy using the wsewsdl3 utility. I can add a Service Reference using VS 2008. Any solutions or troubleshooting suggestions? Here are the relevant sections from the config file for my WCF service:

        <system.serviceModel>
          <services>
            <service behaviorConfiguration="MyBehavior" name="MyService">
              <endpoint address="" binding="customBinding" bindingConfiguration="wseBinding" contract="IMyService" />
              <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <bindings>
            <customBinding>
              <binding name="wseBinding">
                <security authenticationMode="UserNameOverTransport" />
                <mtomMessageEncoding messageVersion="Soap11WSAddressingAugust2004" />
                <httpsTransport/>
              </binding>
            </customBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior name="MyBehavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
                <serviceCredentials>
                  <userNameAuthentication userNamePasswordValidationMode="Custom" customUserNamePasswordValidatorType="MyCustomValidator" />
                </serviceCredentials>
                <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="MyRoleProvider" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        </system.serviceModel>

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.)

    I'm working on a new iPhone app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view, and I want as-you-type response as the user types new words. Word matches must be prefixes of words in the text. The text is composed of hundreds of thousands of words.

    In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of:

        SELECT id, * FROM textTable
        JOIN (SELECT DISTINCT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz') ON id = textTableId
        LIMIT 50

    This runs very fast. Using an IN would probably work just as well, i.e.:

        SELECT * FROM textTable
        WHERE id IN (SELECT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz')
        LIMIT 50

    The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy.

    I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control over the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what the usual attack is. Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes.

    I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited-resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?
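    For reference, the Core Data analogue of the prototype's query can be expressed with a prefix predicate and a fetch limit. This is only a sketch, assuming a Word entity with a "word" string attribute and a relationship back to the main entity; the entity and attribute names are illustrative, not from the original schema:

        // Fetch at most 50 Word objects whose text starts with the typed prefix.
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        request.entity = [NSEntityDescription entityForName:@"Word" inManagedObjectContext:context];
        request.predicate = [NSPredicate predicateWithFormat:@"word BEGINSWITH[c] %@", searchText];
        request.fetchLimit = 50;   // plays the role of LIMIT 50 in the hand-written SQL

        NSError *error = nil;
        NSArray *matches = [context executeFetchRequest:request error:&error];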

    Read the article

  • UIScrollView strange zoom behavior when content is a UIView subclass

    - by sigsegv
    Hi, I'm experiencing the following: I created a UIView subclass with a CATiledLayer as its backing layer by overriding the layerClass method. The layer properties (delegate, tileSize, etc.) are set in the initWithFrame: method of the subclass:

        + (Class)layerClass {
            return [CATiledLayer class];
        }

        - (id)initWithFrame:(CGRect)frame {
            if (self = [super initWithFrame:frame]) {
                renderer = [[MFPDFRenderer alloc] init];
                tiledLayer = (CATiledLayer *)[self layer];
                [tiledLayer setFrame:frame];
                [tiledLayer setLevelsOfDetail:2];
                [tiledLayer setLevelsOfDetailBias:3];
                [tiledLayer setTileSize:CGSizeMake(512, 512)];
                [tiledLayer setDelegate:renderer];
            }
            return self;
        }

    Then I add an instance of said class as the content of a UIScrollView, set the UIScrollView properties, and implement the required delegate methods. Everything works fine, but when zooming, the scroll view keeps repositioning itself on its center. It's hardly noticeable when zooming in the center of the content, but unbearable otherwise. The same scroll view works fine when I use as (zoomable) content any other view, such as a UIImageView, or even a normal UIView with a CATiledLayer (with the same properties and delegate as in the subclass implementation) added as a sublayer. When I check layer bounds and frame in the drawLayer:inContext: method of the delegate, I get the following results as the zoom increases.

    UIView with CATiledLayer as sublayer:

        2010-04-03 21:05:33.499 Renderer[89293:4903] Layer: (0.000, 0.000) 320.000 x 460.000
        2010-04-03 21:05:33.500 Renderer[89293:4903] Bounds: (0.000, 0.000) 320.000 x 460.000
        2010-04-03 21:05:33.529 Renderer[89293:4903] Layer: (0.000, 0.000) 320.000 x 460.000
        2010-04-03 21:05:33.534 Renderer[89293:4903] Bounds: (0.000, 0.000) 320.000 x 460.000

    Custom subclass:

        2010-04-03 21:04:15.969 Renderer[88957:4903] Layer: (0.000, 0.000) 657.910 x 945.746
        2010-04-03 21:04:15.970 Renderer[88957:4903] Bounds: (0.000, 0.000) 320.000 x 460.000
        2010-04-03 21:04:17.428 Renderer[88957:4903] Layer: (-0.000, 0.000) 766.964 x 1102.510
        2010-04-03 21:04:17.429 Renderer[88957:4903] Bounds: (0.000, 0.000) 320.000 x 460.000
        [...]
        2010-04-03 21:19:10.388 Renderer[92573:4903] Layer: (-0.000, 0.000) 905.680 x 1301.916
        2010-04-03 21:19:10.388 Renderer[92573:4903] Bounds: (0.000, 0.000) 320.000 x 460.000

    I suppose that's the culprit, or at least another symptom. I can add that I get the same erratic behavior if my subclass is built over a standard CALayer with the same renderer. Any suggestion will be appreciated!

    Read the article

  • Adding simple marker clusterer to google map

    - by take2
    Hi, I'm having problems with adding marker clusterer functionality to my map. What I want is to use a custom icon for my markers, and every marker has its own info window which I want to be able to edit. I did accomplish that, but now I have problems adding the marker clusterer library functionality. I read something about adding markers to an array, but I'm not sure what that would exactly involve. Besides, all of the examples with arrays that I have found don't have info windows, and searching through the code I didn't find an appropriate way to add them. Here is my code (mostly from Geocodezip.com):

        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
        <script type="text/javascript" src="http://google-maps-utility-library-v3.googlecode.com/svn/trunk/markerclusterer/src/markerclusterer.js"></script>
        <style type="text/css">
          html, body { height: 100%; }
        </style>
        <script type="text/javascript">
        //<![CDATA[
        var map = null;

        function initialize() {
          var myOptions = {
            zoom: 8,
            center: new google.maps.LatLng(43.907787, -79.359741),
            mapTypeControl: true,
            mapTypeControlOptions: { style: google.maps.MapTypeControlStyle.DROPDOWN_MENU },
            navigationControl: true,
            mapTypeId: google.maps.MapTypeId.ROADMAP
          };
          map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);

          var mcOptions = { gridSize: 50, maxZoom: 15 };
          var mc = new MarkerClusterer(map, [], mcOptions);

          google.maps.event.addListener(map, 'click', function() {
            infowindow.close();
          });

          // Add markers to the map
          // Set up three markers with info windows
          var point = new google.maps.LatLng(43.65654, -79.90138);
          var marker1 = createMarker(point, 'Abc');
          var point = new google.maps.LatLng(43.91892, -78.89231);
          var marker2 = createMarker(point, 'Abc');
          var point = new google.maps.LatLng(43.82589, -79.10040);
          var marker3 = createMarker(point, 'Abc');

          var markerArray = new Array(marker1, marker2, marker3);
          mc.addMarkers(markerArray, true);
        }

        var infowindow = new google.maps.InfoWindow({
          size: new google.maps.Size(150, 50)
        });

        function createMarker(latlng, html) {
          var image = '/321.png';
          var contentString = html;
          var marker = new google.maps.Marker({
            position: latlng,
            map: map,
            icon: image,
            zIndex: Math.round(latlng.lat() * -100000) << 5
          });
          google.maps.event.addListener(marker, 'click', function() {
            infowindow.setContent(contentString);
            infowindow.open(map, marker);
          });
        }
        //]]>
        </script>
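    One thing visible in the snippet itself: createMarker() never returns the marker it builds, so marker1, marker2 and marker3 are undefined by the time they reach mc.addMarkers(). A sketch of the adjusted function; as a design note, with a clusterer it is usually the clusterer rather than the marker options that decides when a marker is attached to the map, so map: map can be dropped:

        function createMarker(latlng, html) {
          var marker = new google.maps.Marker({
            position: latlng,
            icon: '/321.png',
            zIndex: Math.round(latlng.lat() * -100000) << 5
          });
          google.maps.event.addListener(marker, 'click', function() {
            infowindow.setContent(html);
            infowindow.open(map, marker);
          });
          return marker;   // the missing piece: hand the marker back to the caller
        }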

    Read the article

  • Problem retrieving HTML5 video duration

    - by drebabels
    UPDATE: OK, so although I haven't solved this problem exactly, I did figure out a workaround that handles my biggest concern... the user experience. First, the video doesn't begin loading until after the viewer hits the play button, so I am assuming that the duration information wasn't available to be pulled (I don't know how to fix this particular issue... although I assume that it would involve loading the video metadata separately from the video, but I don't even know if that is possible). So to get around the fact that there is no duration data, I decided to hide the duration info (and actually the entire control) completely until you hit play. I know... it's cheating. But for now it makes me happy :) That said... if anyone knows how to load the video metadata separately from the video file... please share. I think that should completely solve this problem.

    I am working on building an HTML5 video player with a custom interface, but I am having some problems getting the video duration information to display. My HTML is real simple (see below):

        <video id="video" poster="image.jpg" controls>
          <source src="video_path.mp4" type="video/mp4" />
          <source src="video_path.ogv" type="video/ogg" />
        </video>
        <ul class="controls">
          <li class="time"><p><span id="timer">0</span> of <span id="duration">0</span></p></li>
        </ul>

    And the JavaScript I am using to get and insert the duration is:

        var duration = $('#duration').get(0);
        var vid_duration = Math.round(video.duration);
        duration.firstChild.nodeValue = vid_duration;

    The problem is nothing happens. I know the video file has the duration data, because if I just use the default controls it displays fine. But the really strange thing is that if I put alert(duration) in my code like so:

        alert(duration);
        var vid_duration = Math.round(video.duration);
        duration.firstChild.nodeValue = vid_duration;

    then it works fine (minus the annoying alert that pops up). Any ideas what is happening here or how I can fix it?
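    The behaviour described (it only works after an alert() delay) is consistent with the duration simply not being available yet when the script runs. A sketch of reading it once the browser has fetched the metadata; adding preload="metadata" to the video tag can additionally hint the browser to request the metadata without downloading the whole file:

        // Wait for the metadata before touching video.duration.
        var video = document.getElementById('video');
        video.addEventListener('loadedmetadata', function () {
            var duration = document.getElementById('duration');
            duration.firstChild.nodeValue = Math.round(video.duration);
        }, false);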

    Read the article

  • Activity Indicator not displaying based on whether the UIWebView is loading or not...

    - by Jack W-H
    Hi folks. Sorry if this is an easy one. Basically, here is my code:

    MainViewController.h:

        //
        //  MainViewController.h
        //  Site
        //
        //  Created by Jack Webb-Heller on 19/03/2010.
        //  Copyright __MyCompanyName__ 2010. All rights reserved.
        //

        #import "FlipsideViewController.h"

        @interface MainViewController : UIViewController <UIWebViewDelegate, FlipsideViewControllerDelegate> {
            IBOutlet UIWebView *webView;
            IBOutlet UIActivityIndicatorView *spinner;
        }

        - (IBAction)showInfo;

        @property(nonatomic,retain) UIWebView *webView;
        @property(nonatomic,retain) UIActivityIndicatorView *spinner;

        @end

    MainViewController.m:

        //
        //  MainViewController.m
        //  Site
        //
        //  Created by Jack Webb-Heller on 19/03/2010.
        //  Copyright __MyCompanyName__ 2010. All rights reserved.
        //

        #import "MainViewController.h"
        #import "MainView.h"

        @implementation MainViewController

        @synthesize webView;
        @synthesize spinner;

        - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
            if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) {
                // Custom initialization
            }
            return self;
        }

        // Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
        - (void)viewDidLoad {
            NSURL *siteURL;
            NSString *siteURLString;
            siteURLString = [[NSString alloc] initWithString:@"http://www.site.com"];
            siteURL = [[NSURL alloc] initWithString:siteURLString];
            [webView loadRequest:[NSURLRequest requestWithURL:siteURL]];
            [siteURL release];
            [siteURLString release];
            [super viewDidLoad];
        }

        - (void)flipsideViewControllerDidFinish:(FlipsideViewController *)controller {
            [self dismissModalViewControllerAnimated:YES];
        }

        - (void)webViewDidFinishLoad:(UIWebView *)webView {
            [spinner stopAnimating];
            spinner.hidden = FALSE;
            NSLog(@"viewDidFinishLoad went through nicely");
        }

        - (void)webViewDidStartLoad:(UIWebView *)webView {
            [spinner startAnimating];
            spinner.hidden = FALSE;
            NSLog(@"viewDidStartLoad seems to be working");
        }

        - (IBAction)showInfo {
            FlipsideViewController *controller = [[FlipsideViewController alloc] initWithNibName:@"FlipsideView" bundle:nil];
            controller.delegate = self;
            controller.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal;
            [self presentModalViewController:controller animated:YES];
            [controller release];
        }

        - (void)didReceiveMemoryWarning {
            // Releases the view if it doesn't have a superview.
            [super didReceiveMemoryWarning];
            // Release any cached data, images, etc that aren't in use.
        }

        - (void)viewDidUnload {
            // Release any retained subviews of the main view.
            // e.g. self.myOutlet = nil;
        }

        - (void)dealloc {
            [spinner release];
            [webView release];
            [super dealloc];
        }

        @end

    Unfortunately nothing is ever written to my log, and for some reason the activity indicator never seems to appear. What's going wrong here? Thanks folks. Jack
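    Nothing in the code assigns the web view's delegate, so unless that connection was made in Interface Builder, webViewDidStartLoad: and webViewDidFinishLoad: will never be called, which would explain the empty log. A sketch of the explicit wiring; note also that webViewDidFinishLoad: sets spinner.hidden = FALSE where TRUE (or hidesWhenStopped) was probably intended:

        // e.g. at the top of viewDidLoad
        webView.delegate = self;
        spinner.hidesWhenStopped = YES;   // then stopAnimating alone hides it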

    Read the article

  • Troubleshooting latency spikes on ESXi NFS datastores

    - by exo_cw
    I'm experiencing fsync latencies of around five seconds on NFS datastores in ESXi, triggered by certain VMs. I suspect this might be caused by VMs using NCQ/TCQ, as this does not happen with virtual IDE drives. This can be reproduced using fsync-tester (by Ted Ts'o) and ioping. For example using a Grml live system with a 8GB disk:

        Linux 2.6.33-grml64:
        root@dynip211 /mnt/sda # ./fsync-tester
        fsync time: 5.0391
        fsync time: 5.0438
        fsync time: 5.0300
        fsync time: 0.0231
        fsync time: 0.0243
        fsync time: 5.0382
        fsync time: 5.0400
        [... goes on like this ...]

    That is 5 seconds, not milliseconds. This is even creating IO-latencies on a different VM running on the same host and datastore:

        root@grml /mnt/sda/ioping-0.5 # ./ioping -i 0.3 -p 20 .
        4096 bytes from . (reiserfs /dev/sda): request=1 time=7.2 ms
        4096 bytes from . (reiserfs /dev/sda): request=2 time=0.9 ms
        4096 bytes from . (reiserfs /dev/sda): request=3 time=0.9 ms
        4096 bytes from . (reiserfs /dev/sda): request=4 time=0.9 ms
        4096 bytes from . (reiserfs /dev/sda): request=5 time=4809.0 ms
        4096 bytes from . (reiserfs /dev/sda): request=6 time=1.0 ms
        4096 bytes from . (reiserfs /dev/sda): request=7 time=1.2 ms
        4096 bytes from . (reiserfs /dev/sda): request=8 time=1.1 ms
        4096 bytes from . (reiserfs /dev/sda): request=9 time=1.3 ms
        4096 bytes from . (reiserfs /dev/sda): request=10 time=1.2 ms
        4096 bytes from . (reiserfs /dev/sda): request=11 time=1.0 ms
        4096 bytes from . (reiserfs /dev/sda): request=12 time=4950.0 ms

    When I move the first VM to local storage it looks perfectly normal:

        root@dynip211 /mnt/sda # ./fsync-tester
        fsync time: 0.0191
        fsync time: 0.0201
        fsync time: 0.0203
        fsync time: 0.0206
        fsync time: 0.0192
        fsync time: 0.0231
        fsync time: 0.0201
        [... tried that for one hour: no spike ...]

    Things I've tried that made no difference:

    - Tested several ESXi builds: 381591, 348481, 260247
    - Tested on different hardware, different Intel and AMD boxes
    - Tested with different NFS servers, all show the same behavior:
      - OpenIndiana b147 (ZFS sync always or disabled: no difference)
      - OpenIndiana b148 (ZFS sync always or disabled: no difference)
      - Linux 2.6.32 (sync or async: no difference)
    - It makes no difference if the NFS server is on the same machine (as a virtual storage appliance) or on a different host

    Guest OS tested, showing problems:

    - Windows 7 64 Bit (using CrystalDiskMark, latency spikes happen mostly during preparing phase)
    - Linux 2.6.32 (fsync-tester + ioping)
    - Linux 2.6.38 (fsync-tester + ioping)

    I could not reproduce this problem on Linux 2.6.18 VMs. Another workaround is to use virtual IDE disks (vs SCSI/SAS), but that is limiting performance and the number of drives per VM.

    Update 2011-06-30: The latency spikes seem to happen more often if the application writes in multiple small blocks before fsync. For example fsync-tester does this (strace output):

        pwrite(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 1048576, 0) = 1048576
        fsync(3)                                = 0

    ioping does this while preparing the file:

        [lots of pwrites]
        pwrite(3, "********************************"..., 4096, 1036288) = 4096
        pwrite(3, "********************************"..., 4096, 1040384) = 4096
        pwrite(3, "********************************"..., 4096, 1044480) = 4096
        fsync(3)                                = 0

    The setup phase of ioping almost always hangs, while fsync-tester sometimes works fine. Is someone capable of updating fsync-tester to write multiple small blocks? My C skills suck ;)

    Update 2011-07-02: This problem does not occur with iSCSI. I tried this with the OpenIndiana COMSTAR iSCSI server. But iSCSI does not give you easy access to the VMDK files, so you can move them between hosts with snapshots and rsync.

    Update 2011-07-06: This is part of a wireshark capture, captured by a third VM on the same vSwitch. This all happens on the same host, no physical network involved. I've started ioping around time 20. There were no packets sent until the five second delay was over:

        No.   Time       Source          Destination     Protocol  Info
        1082  16.164096  192.168.250.10  192.168.250.20  NFS       V3 WRITE Call (Reply In 1085), FH:0x3eb56466 Offset:0 Len:84 FILE_SYNC
        1083  16.164112  192.168.250.10  192.168.250.20  NFS       V3 WRITE Call (Reply In 1086), FH:0x3eb56f66 Offset:0 Len:84 FILE_SYNC
        1084  16.166060  192.168.250.20  192.168.250.10  TCP       nfs > iclcnet-locate [ACK] Seq=445 Ack=1057 Win=32806 Len=0 TSV=432016 TSER=769110
        1085  16.167678  192.168.250.20  192.168.250.10  NFS       V3 WRITE Reply (Call In 1082) Len:84 FILE_SYNC
        1086  16.168280  192.168.250.20  192.168.250.10  NFS       V3 WRITE Reply (Call In 1083) Len:84 FILE_SYNC
        1087  16.168417  192.168.250.10  192.168.250.20  TCP       iclcnet-locate > nfs [ACK] Seq=1057 Ack=773 Win=4163 Len=0 TSV=769110 TSER=432016
        1088  23.163028  192.168.250.10  192.168.250.20  NFS       V3 GETATTR Call (Reply In 1089), FH:0x0bb04963
        1089  23.164541  192.168.250.20  192.168.250.10  NFS       V3 GETATTR Reply (Call In 1088) Directory mode:0777 uid:0 gid:0
        1090  23.274252  192.168.250.10  192.168.250.20  TCP       iclcnet-locate > nfs [ACK] Seq=1185 Ack=889 Win=4163 Len=0 TSV=769821 TSER=432716
        1091  24.924188  192.168.250.10  192.168.250.20  RPC       Continuation
        1092  24.924210  192.168.250.10  192.168.250.20  RPC       Continuation
        1093  24.924216  192.168.250.10  192.168.250.20  RPC       Continuation
        1094  24.924225  192.168.250.10  192.168.250.20  RPC       Continuation
        1095  24.924555  192.168.250.20  192.168.250.10  TCP       nfs > iclcnet_svinfo [ACK] Seq=6893 Ack=1118613 Win=32625 Len=0 TSV=432892 TSER=769986
        1096  24.924626  192.168.250.10  192.168.250.20  RPC       Continuation
        1097  24.924635  192.168.250.10  192.168.250.20  RPC       Continuation
        1098  24.924643  192.168.250.10  192.168.250.20  RPC       Continuation
        1099  24.924649  192.168.250.10  192.168.250.20  RPC       Continuation
        1100  24.924653  192.168.250.10  192.168.250.20  RPC       Continuation

    2nd Update 2011-07-06: There seems to be some influence from TCP window sizes. I was not able to reproduce this problem using FreeNAS (based on FreeBSD) as a NFS server. The wireshark captures showed TCP window updates to 29127 bytes in regular intervals. I did not see them with OpenIndiana, which uses larger window sizes by default. I can no longer reproduce this problem if I set the following options in OpenIndiana and restart the NFS server:

        ndd -set /dev/tcp tcp_recv_hiwat 8192     # default is 128000
        ndd -set /dev/tcp tcp_max_buf 1048575     # default is 1048576

    But this kills performance: writing from /dev/zero to a file with dd_rescue goes from 170MB/s to 80MB/s.

    Update 2011-07-07: I've uploaded this tcpdump capture (can be analyzed with wireshark). In this case 192.168.250.2 is the NFS server (OpenIndiana b148) and 192.168.250.10 is the ESXi host. Things I've tested during this capture:

    - Started "ioping -w 5 -i 0.2 ." at time 30, 5 second hang in setup, completed at time 40.
    - Started "ioping -w 5 -i 0.2 ." at time 60, 5 second hang in setup, completed at time 70.
    - Started "fsync-tester" at time 90, with the following output, stopped at time 120:

        fsync time: 0.0248
        fsync time: 5.0197
        fsync time: 5.0287
        fsync time: 5.0242
        fsync time: 5.0225
        fsync time: 0.0209

    2nd Update 2011-07-07: Tested another NFS server VM, this time NexentaStor 3.0.5 community edition: shows the same problems.

    Update 2011-07-31: I can also reproduce this problem on the new ESXi build 4.1.0.433742.

    Read the article

  • Can't override a global WPF style that is set by TargetType on a single specific control

    - by Matt H.
    I have a style applied to all my textboxes, defined in a resource dictionary: <Style TargetType="TextBlock"> <Setter Property="TextBlock.FontSize" Value="{Binding Source={StaticResource ApplicationUserSettings}, Path=fontSize, Mode=OneWay}" /> <Setter Property="TextBlock.TextWrapping" Value="Wrap" /> <Setter Property="TextBlock.VerticalAlignment" Value="Center"/> <Setter Property="Background" Value="Transparent"/> <Setter Property="TextBox.FontFamily" Value="{Binding Source={StaticResource ApplicationUserSettings}, Path=fontName, Mode=OneWay}"/> </Style> The font size and font family properties are bound to a special user settings class that implements INotifyPropertyChanged, which allows changes to font size and font family to immediately propagate throughout my application. However, in a UserControl I've created (ironically, the screen that allows the user to customize their font settings), I want the font size and font family to remain static. No matter what I try, my global font settings override what I set in my user control: <UserControl x:Class="ctlUserSettings" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:R2D2" Height="400" Width="600"> <Grid> <Grid.Resources> <Style x:Key="tbxStyle" TargetType="TextBox"> <Style.Setters> <Setter Property="FontSize" Value="14"/> <Setter Property="FontFamily" Value="Tahoma"/> </Style.Setters> </Style> ... etc... <StackPanel Margin="139,122.943,41,0" Orientation="Horizontal" Height="33" VerticalAlignment="Top"> <TextBox Style="{x:Null}" FontSize="13" FontFamily="Tahoma" HorizontalAlignment="Left" MaxWidth="500" MinWidth="350" Name="txtReaderPath" Height="Auto" VerticalAlignment="Top" /> <TextBox Style="{StaticResource tbxStyle}" Margin="15,0,0,0" HorizontalAlignment="Left" Name="txtPath" Width="43" Height="23" VerticalAlignment="Top">(some text)</TextBox> </StackPanel> I've tried setting Style to {x:Null}, setting custom font sizes inline, and setting a style in the resources of this control. None take precedence over the styles in my resource dictionary. As you can see, the XAML sample above shows a sprinkling of all the things I've tried... What am I missing?
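    A hedged sketch of one way out (an assumption on my part, not necessarily what the asker ended up doing): WPF resolves an implicit style from the nearest resource scope, so re-declaring unkeyed styles for the affected element types inside the UserControl's own Resources should shadow the dictionary-level ones for everything inside that control:

        <UserControl.Resources>
            <!-- Sketch: local implicit styles win resource lookup over the
                 application/dictionary-level ones for elements in this control. -->
            <Style TargetType="TextBox">
                <Setter Property="FontSize" Value="14"/>
                <Setter Property="FontFamily" Value="Tahoma"/>
            </Style>
            <Style TargetType="TextBlock">
                <Setter Property="FontSize" Value="14"/>
                <Setter Property="FontFamily" Value="Tahoma"/>
            </Style>
        </UserControl.Resources>

    Note that tbxStyle in the question is keyed, so it only applies where it is referenced explicitly; the sketch relies on unkeyed (implicit) styles instead.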

    Read the article

  • How to get jQuery draggable elements scroll with mb.imageNavigator

    - by bulltorious
    I am using jQuery mb.imageNavigator (1.8) from http://pupunzi.open-lab.com/mb-jquery-components/mb-imagenavigator/ to implement a Risk-style game adjudication system. Using the imageNavigator plugin I am able to scroll around a large game map of the world. My issue is that when I declare some elements as draggable and drag them onto the map image, their location does not stay relative to where in the picture I put them. They just stay fixed on the screen no matter where I scroll. Does anyone know how to make the draggable elements scroll with the image? Matteo writes that "you can add an additional content layer that overlay image and moves with it" and posts an example, but I can't make it work. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <head> <script type="text/jscript" src="lib/jquery/jquery-1.3.2.js"> </script> <script type="text/jscript" src="lib/jquery/jquery-ui-1.7.2.custom.min.js"> </script> <script type="text/jscript" src="lib/utilities/mbImgNav.min.js_0.js"> </script> <script type="text/jscript" src="lib/utilities/start.js"> </script> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>New Web Project</title> </head> <body> <div id="AdamsAshTray" style="float:right; background-color:red; z-index:999"> test test test </div> <div id="navArea"> <div imageUrl="someimage" navPosition="BR" navWidth="100" style="display:none;" class="imagesContainer"> <span class="title">zuccheriera</span> <div class="description"> <STRONG>description1</STRONG> </div> </div> </div> </body> $(document).ready(function(){ $("#navArea").imageNavigator({ areaWidth:1820, areaHeight:1000, draggerStyle: "1px dotted red", navOpacity: .8 }) $("#AdamsAshTray").draggable({ grid: [20,20] }); })
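    A hedged sketch of the "additional content layer" idea (assumptions: the plugin pans a wrapper element inside #navArea, and .imagesContainer from the markup above is, or sits inside, that moving layer; the selector would need adjusting to the plugin's actual generated markup):

        $(document).ready(function () {
            // Put the draggable inside the layer the plugin moves, so it pans with the map.
            var $layer = $("#navArea .imagesContainer");
            $("#AdamsAshTray")
                .appendTo($layer)
                .css({ position: "absolute", top: 40, left: 40 })  // position relative to the map image
                .draggable({ grid: [20, 20], containment: "parent" });
        });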

    Read the article

  • Where to put a glossary of important terms and patterns in documentation?

    - by Tetha
    Greetings. I want to document certain patterns in the code in order to build up a consistent terminology (in order to ease communication about the software). I am, however, unsure where to define the given terms. To get on the same level, an example: I have a code generator. This code generator receives a certain InputStructure from the Parser (yes, the name InputStructure might be less than ideal). This InputStructure is then transformed into various subsequent datastructures (like an abstract description of the validation process). Each of these datastructures can either be transformed into another value of the same datastructure, or it can be transformed into the next datastructure. This should sound like Pipes and Filters to some degree. Given this, I call an operation which takes a datastructure and constructs a value of the same datastructure a transformation, while I call an operation which takes a datastructure and produces a different follow-up datastructure a derivation. The final step of deriving a string containing code is called emitting. (So, overall, the code generator takes the input structure and transforms, transforms, derives, transforms, derives and finally emits.) I think emphasizing these terms will be beneficial in communication, because then it is easy to talk about things. If you hear "transformation", you know "OK, I only need to think about these two datastructures"; if you hear "emitting", you know "OK, I only need to know this datastructure and the target language." However, where do I document these patterns? The current code base uses visitors and offers classes named like ValidatorTransformationBase<ResultType> (or InputStructureTransformationBase<ResultType>, and so on). I do not really want to add the definition of such terms to the interfaces, because in that case I'd have to repeat myself on each and every interface, which clearly violates DRY. I am considering emphasizing the distinction between Transformations and Derivations by adding further interfaces (I would have to think about a better name for the TransformationBase classes, but then I could do things like ValidatorTransformation extends ValidatorTransformationBase<Validator>, or ValidatorDerivationFromInputStructure extends InputStructureTransformation<Validator>). I also think I should add a custom page to the existing doxygen documentation, such as "Glossary" or "Architecture Principles", which contains such principles. The only disadvantage of this would be that a contributor will need to find this page in order to actually learn about this. Am I missing possibilities or am I judging something wrong here in your opinion? -- Regards, Tetha
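    If the doxygen route is chosen, a custom page is a single comment block anywhere in the sources, roughly like the sketch below (the definitions are paraphrased from the question; the page name and wording are illustrative only):

        /**
         * \page glossary Glossary of code generator terms
         *
         * - \b Transformation: an operation that takes a datastructure and
         *   constructs another value of the same datastructure.
         * - \b Derivation: an operation that takes a datastructure and produces
         *   the next (different) datastructure in the pipeline.
         * - \b Emitting: the final derivation step that produces a string of code.
         */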

    Read the article

  • Adding Icons next to items in Navigation Drawer

    - by DunriteJW
    I have been trying to figure this out for quite some time right now. I've looked all over this site and many others, and can't find anything that works. I simply want icons next to each item in my navigation drawer. I am currently using the method that Google's navigation drawer sample app uses. in the MainActivity.java I have the following: mColorTitles = getResources().getStringArray(R.array.colors_array); mDrawerLayout = (DrawerLayout) findViewById(R.id.drawer_layout); mDrawerList = (ListView) findViewById(R.id.left_drawer); mColorIcons = getResources().getStringArray(R.array.color_icons); adapter = new ArrayAdapter<String>(this, R.layout.drawer_list_item, mColorTitles); // set a custom shadow that overlays the main content when the drawer opens mDrawerLayout.setDrawerShadow(R.drawable.drawer_shadow, GravityCompat.START); // set up the drawer's list view with items and click listener mDrawerList.setAdapter(adapter); mDrawerList.setOnItemClickListener(new DrawerItemClickListener()); my drawer_list_item.xml: <TextView xmlns:android="http://schemas.android.com/apk/res/android" android:id="@android:id/text1" android:layout_width="match_parent" android:layout_height="match_parent" android:textAppearance="?android:attr/textAppearanceListItemSmall" android:gravity="center_vertical" android:paddingLeft="5dp" android:paddingRight="16dp" android:textColor="#000" android:background="?android:attr/activatedBackgroundIndicator" android:minHeight="?android:attr/listPreferredItemHeightSmall"/> it currently just makes the navigation drawer display the color titles from the array. I have the icons that I want in another array, and they follow the exact same order as I want them associated with the colors. I just have no idea how to even begin inserting the icons from that array into the navigation items if it helps, here's what my arrays look like in my strings.xml (not full code) <string-array name="colors_array"> <item>Home</item> <item>Cherry</item> <item>Crimson</item> ... <array name="color_icons"> <item>@drawable/homeicon</item> <item>@drawable/cherryicon</item> <item>@drawable/crimsonicon</item> ... I've tried putting a drawable in the drawer_list_item, which works, but (of course) it always puts the same one in there. I could not think of a way to change it according to the color. I am relatively new to android programming, so if I am missing something simple, I'm sorry. If you could help me out, I would greatly appreciate it, as this is basically the last thing I need to do before I publish my application to the Play Store. Thanks in advance!
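    A common way to do this is a small custom adapter instead of the stock ArrayAdapter. Below is a hedged sketch (assumptions on my part: drawer_list_item.xml is changed to hold an ImageView with id icon plus a TextView with id title, and the icon array is read with obtainTypedArray rather than getStringArray):

        import android.content.Context;
        import android.content.res.TypedArray;
        import android.view.LayoutInflater;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.ArrayAdapter;
        import android.widget.ImageView;
        import android.widget.TextView;

        public class DrawerAdapter extends ArrayAdapter<String> {
            private final TypedArray icons;   // one drawable per drawer title

            public DrawerAdapter(Context context, String[] titles, TypedArray icons) {
                super(context, R.layout.drawer_list_item, R.id.title, titles);
                this.icons = icons;
            }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                View row = convertView != null ? convertView
                        : LayoutInflater.from(getContext())
                                .inflate(R.layout.drawer_list_item, parent, false);
                ((TextView) row.findViewById(R.id.title)).setText(getItem(position));
                ((ImageView) row.findViewById(R.id.icon))
                        .setImageDrawable(icons.getDrawable(position));
                return row;
            }
        }

        // In onCreate, replacing the plain ArrayAdapter:
        // TypedArray icons = getResources().obtainTypedArray(R.array.color_icons);
        // mDrawerList.setAdapter(new DrawerAdapter(this, mColorTitles, icons));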

    Read the article

  • Ejabberd clustering problem with amazon EC2 server

    - by user353362
    Hello Guys! I have been trying to install ejabberd server on Amazons EC2 instance. I am kinds a stuck at this step right now. I am following this guide: http://tdewolf.blogspot.com/2009/07/clustering-ejabberd-nodes-using-mnes... From the guide I have sucessfully completed the Set up First Node (on ejabberd1) part. But am stuck in part 4 of Set up Second Node (on ejabberd2) So all in all, I created the main node and am able to run the server on that node and access its admin console from then internet. In the second node I have installed ejabberd. But I am stuck at point 4 of setting up the node instruction presented in this blog (http://tdewolf.blogspot.com/2009/07/clustering-ejabberd-nodes-using-mnes...). I execute this command " erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia " on the second server and get a crashing error: root@domU-12-31-39-0F-7D-14:/var/lib/ejabberd# erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia {error_logger,{{2010,5,28},{23,52,25}},"Protocol: ~p: register error: ~p~n",["inet_tcp",{{badmatch,{error,duplicate_name}},[{inet_tcp_dist,listen,1},{net_kernel,start_protos,4},{net_kernel,start_protos,3},{net_kernel,init_node,2},{net_kernel,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}]} {error_logger,{{2010,5,28},{23,52,25}},crash_report,[[{pid,<0.21.0},{registered_name,net_kernel},{error_info,{exit,{error,badarg},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{initial_call,{net_kernel,init,['Argument__1']}},{ancestors,[net_sup,kernel_sup,<0.8.0]},{messages,[]},{links,[#Port<0.52,<0.18.0]},{dictionary,[{longnames,false}]},{trap_exit,true},{status,running},{heap_size,610},{stack_size,23},{reductions,518}],[]]} {error_logger,{{2010,5,28},{23,52,25}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{'EXIT',nodistribution}},{offender,[{pid,undefined},{name,net_kernel},{mfa,{net_kernel,start_link,[['ejabberd@domU-12-31-39-0F-7D-14',shortnames]]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]} {error_logger,{{2010,5,28},{23,52,25}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]} {error_logger,{{2010,5,28},{23,52,25}},crash_report,[[{pid,<0.7.0},{registered_name,[]},{error_info,{exit,{shutdown,{kernel,start,[normal,[]]}},[{application_master,init,4},{proc_lib,init_p_do_apply,3}]}},{initial_call,{application_master,init,['Argument_1','Argument_2','Argument_3','Argument_4']}},{ancestors,[<0.6.0]},{messages,[{'EXIT',<0.8.0,normal}]},{links,[<0.6.0,<0.5.0]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,233},{stack_size,23},{reductions,123}],[]]} {error_logger,{{2010,5,28},{23,52,25}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]} {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"} Crash dump was written to: erl_crash.dump Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}) root@domU-12-31-39-0F-7D-14:/var/lib/ejabberd# any idea what going on? 
I am not really sure how to solve this problem :S how to let ejabberd only access register from one special server? › Is that the right way of copying .erlang.cookie file? Submitted by privateson on Sat, 2010-05-29 00:11. before this I was getting this error (see below), I solved it by running this command: chmod 400 .erlang.cookie Also to copy the cookie I simply created a file using vi on the second server and copied the secret code from server one to the second server. Is that the right way of copying .erlang.cookie file? ERROR ~~~~~~~~~~ root@domU-12-31-39-0F-7D-14:/etc/ejabberd# erl -sname ejabberd@domU-12-31-39-0F-7D-14 -mnesia dir '"/var/lib/ejabberd/"' -mnesia extra_db_nodes "['ejabberd@domU-12-31-39-02-C8-36']" -s mnesia {error_logger,{{2010,5,28},{23,28,56}},"Cookie file /root/.erlang.cookie must be accessible by owner only",[]} {error_logger,{{2010,5,28},{23,28,56}},crash_report,[[{pid,<0.20.0},{registered_name,auth},{error_info,{exit,{"Cookie file /root/.erlang.cookie must be accessible by owner only",[{auth,init_cookie,0},{auth,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]},[{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{initial_call,{auth,init,['Argument__1']}},{ancestors,[net_sup,kernel_sup,<0.8.0]},{messages,[]},{links,[<0.18.0]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,987},{stack_size,23},{reductions,439}],[]]} {error_logger,{{2010,5,28},{23,28,56}},supervisor_report,[{supervisor,{local,net_sup}},{errorContext,start_error},{reason,{"Cookie file /root/.erlang.cookie must be accessible by owner only",[{auth,init_cookie,0},{auth,init,1},{gen_server,init_it,6},{proc_lib,init_p_do_apply,3}]}},{offender,[{pid,undefined},{name,auth},{mfa,{auth,start_link,[]}},{restart_type,permanent},{shutdown,2000},{child_type,worker}]}]} {error_logger,{{2010,5,28},{23,28,56}},supervisor_report,[{supervisor,{local,kernel_sup}},{errorContext,start_error},{reason,shutdown},{offender,[{pid,undefined},{name,net_sup},{mfa,{erl_distribution,start_link,[]}},{restart_type,permanent},{shutdown,infinity},{child_type,supervisor}]}]} {error_logger,{{2010,5,28},{23,28,56}},crash_report,[[{pid,<0.7.0},{registered_name,[]},{error_info,{exit,{shutdown,{kernel,start,[normal,[]]}},[{application_master,init,4},{proc_lib,init_p_do_apply,3}]}},{initial_call,{application_master,init,['Argument_1','Argument_2','Argument_3','Argument_4']}},{ancestors,[<0.6.0]},{messages,[{'EXIT',<0.8.0,normal}]},{links,[<0.6.0,<0.5.0]},{dictionary,[]},{trap_exit,true},{status,running},{heap_size,233},{stack_size,23},{reductions,123}],[]]} {error_logger,{{2010,5,28},{23,28,56}},std_info,[{application,kernel},{exited,{shutdown,{kernel,start,[normal,[]]}}},{type,permanent}]} {"Kernel pid terminated",application_controller,"{application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}"} Crash dump was written to: erl_crash.dump Kernel pid terminated (application_controller) ({application_start_failure,kernel,{shutdown,{kernel,start,[normal,[]]}}}) root@domU-12-31-39-0F-7D-14:/var/lib/ejabberd# cat /var/log/ejabberd/ejabberd.log =INFO REPORT==== 2010-05-28 22:48:53 === I(<0.321.0:mod_pubsub:154) : pubsub init "localhost" [{access_createnode, pubsub_createnode}, {plugins, ["default","pep"]}] =INFO REPORT==== 2010-05-28 22:48:53 === I(<0.321.0:mod_pubsub:210) : ** tree plugin is nodetree_default =INFO REPORT==== 2010-05-28 22:48:53 === I(<0.321.0:mod_pubsub:214) : ** init default plugin =INFO REPORT==== 2010-05-28 22:48:53 === I(<0.321.0:mod_pubsub:214) : ** init pep plugin =ERROR 
REPORT==== 2010-05-28 23:40:08 === ** Connection attempt from disallowed node 'ejabberdctl1275090008486951000@domU-12-31-39-0F-7D-14' ** =ERROR REPORT==== 2010-05-28 23:41:10 === ** Connection attempt from disallowed node 'ejabberdctl1275090070163253000@domU-12-31-39-0F-7D-14' **

    Read the article

  • Asp.net hosting equivalent of Dreamhost (pricing, features and support)

    - by Cherian
    Disclaimer: I have browsed http://stackoverflow.com/questions/tagged/asp.net+hosting and didn’t find anything quite similar in value to Dreamhost. One of the biggest impediments IMHO for developing web applications on asp.net is the cost of deployment. I am not talking about building sites like Stackoverflow.com or plentyoffish.com. This is about sites that are bigger than brochureware and smaller than ones that require dedicated servers. Let me give you an example. xmec.org is an asp.net site I maintain for my college alumni. On an average it’s slated to hit around 1000-1100 views per day. At present it’s hosted on godaddy. The service is so damn pathetic; I am using it only because of the lack of options. The site doesn’t scale (no, it’s not the code) and the web control panels are extremely slow. The money I pay doesn’t justify the service or the performance. Every deployment push is a visit to the infuriating web control panel to set the permissions and the root directories. Had I developed it in python, this would have been deployed on Dreamhost.com with $10/year hosting fees (they have offers running all throughout) 50 GB space 5 MySQL Databases Shell / FTP Users POP / SMTP Access Unlimited Domains hosting Unlimited Sub domains hosting Unlimited Domains Forwarded/Mirrored Custom DNS (These are the only ones I could think of. More at the feature page) With a dream host shell, I even have a svn checked-out version of wordpress for my blog. Now, that’s control! To my question: Is there any asp.net (preferably .net 3.5. Dreamhost keeps on updating versions every fortnight) hosting company providing remotely similar feature-sets and pricing like Dreamhost. My requirements are: Less than $15-25/ year Typical WISP minus PHP .net 3.5 SP1 Full Trust mode(I can live with medium trust, if not for the IL emitting libraries) Isolated Application Pool 5 – 10 MySQL db’s Unlimited domain hosting MsSql 2005 or 2008 FTP support At Least 5 GB space SMTP IIS 7 Log files Accessibility Moderately good control panel Scripting, shell support Nominal bandwidth Another case in point: Recently I’ve been contemplating building a tool-website to find duplicates and weird characters in my Google contacts and fix them. With asp.net, the best part is that I can do this with LINQ to XML in less than 100 lines of code. What’s bad is the hosting part. I don’t think I stand to make any money out of this and therefore can’t afford to host it on GoGrid or DiscountAsp.net. Godaddy is not an option either. If I do this in python, I can push to this my existing $10 Dreamhost account with another domain pointed. No extra cost. Svn exported with scripts (capability) to change the connection string! Looking at the problem holistically, I think I represent a large breed of programmers playing it cheap and experimenting different things on a regular basis, one of which will become the next twitter/digg.

    Read the article

  • gem install of mongrel

    - by atlantis
    I initiated myself into rails development yesterday. I installed ruby 1.9.1, rubygems and rails. Running 'gem install mongrel' worked fine and ostensibly installed mongrel too. I am slightly puzzled because: script/server starts webrick by default; 'which mongrel' returns nothing; and 'locate mongrel' returns lots of entries like
        /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1
        /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib
        /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel
        . . .
        /usr/local/bin/mongrel_rails
        /usr/local/lib/ruby/gems/1.9.1/cache/mongrel-1.1.5.gem
        /usr/local/lib/ruby/gems/1.9.1/doc/actionpack-2.3.2/rdoc/files/lib/action_controller/vendor/rack-1_0/rack/handler/evented_mongrel_rb.html
        /usr/local/lib/ruby/gems/1.9.1/doc/actionpack-2.3.2/rdoc/files/lib/action_controller/vendor/rack-1_0/rack/handler/mongrel_rb.html
        /usr/local/lib/ruby/gems/1.9.1/doc/actionpack-2.3.2/rdoc/files/lib/action_controller/vendor/rack-1_0/rack/handler/swiftiplied_mongrel_rb.html
        /usr/local/lib/ruby/gems/1.9.1/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/handler/evented_mongrel.rb
        /usr/local/lib/ruby/gems/1.9.1/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/handler/mongrel.rb
        /usr/local/lib/ruby/gems/1.9.1/gems/actionpack-2.3.2/lib/action_controller/vendor/rack-1.0/rack/handler/swiftiplied_mongrel.rb
        /usr/local/lib/ruby/gems/1.9.1/gems/mongrel-1.1.5
        . . .
    It does look like I have mongrel installed (both the default installation and my custom install). So why doesn't 'which mongrel' return anything? Also, trying to reinstall mongrel using 'gem install mongrel' throws its own set of exceptions:
        Building native extensions. This could take a while...
        ERROR: Error installing mongrel: ERROR: Failed to build gem native extension.
        /usr/local/bin/ruby extconf.rb install mongrel
        checking for main() in -lc... yes
        creating Makefile
        make
        gcc -I. -I/usr/local/include/ruby-1.9.1/i386-darwin9.7.0 -I/usr/local/include/ruby-1.9.1/ruby/backward -I/usr/local/include/ruby-1.9.1 -I. -D_XOPEN_SOURCE -D_DARWIN_C_SOURCE -fno-common -D_XOPEN_SOURCE=1 -O2 -g -Wall -Wno-parentheses -fno-common -pipe -fno-common -o http11.o -c http11.c
        http11.c: In function 'http_field':
        http11.c:77: error: 'struct RString' has no member named 'ptr'
        http11.c:77: error: 'struct RString' has no member named 'len'
        http11.c:77: warning: left-hand operand of comma expression has no effect
        http11.c:77: warning: statement with no effect
        http11.c: In function 'header_done':
        http11.c:172: error: 'struct RString' has no member named 'ptr'
        http11.c:174: error: 'struct RString' has no member named 'ptr'
        http11.c:176: error: 'struct RString' has no member named 'ptr'
        http11.c:177: error: 'struct RString' has no member named 'len'
        http11.c: In function 'HttpParser_execute':
        http11.c:298: error: 'struct RString' has no member named 'ptr'
        http11.c:299: error: 'struct RString' has no member named 'len'
        make: *** [http11.o] Error 1
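    For context on those compile errors: mongrel 1.1.5's http11.c still reads struct RString's ptr/len fields directly, which Ruby 1.9 no longer allows; 1.9-compatible extensions use the RSTRING_PTR/RSTRING_LEN accessor macros instead. An illustrative C fragment (my own sketch, not the actual mongrel patch):

        #include "ruby.h"

        /* Illustration only: reading a Ruby string from a C extension. */
        static VALUE example_length(VALUE self, VALUE str)
        {
            /* Ruby 1.8-era code used RSTRING(str)->ptr and RSTRING(str)->len,
               which no longer compiles on 1.9; the macros below work on both. */
            const char *data = RSTRING_PTR(str);
            long len = RSTRING_LEN(str);
            (void)data;                 /* unused in this sketch */
            return LONG2NUM(len);
        }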

    Read the article

< Previous Page | 628 629 630 631 632 633 634 635 636 637 638 639  | Next Page >