Search Results

Search found 8550 results on 342 pages for 'datetime operation'.


  • python urllib post question

    - by paul
    Hello all, I'm writing a simple Python POST script but it isn't working well. The login has two parts: the first login uses 'http://mybuddy.buddybuddy.co.kr/userinfo/UserInfo.asp' and the second uses 'http://user.buddybuddy.co.kr/usercheck/UserCheckPWExec.asp'. I can log in to the first page, but I can't log in to the second one; it returns an 'illegal access' error. I've heard this is related to cookies, but I don't know how to resolve the problem. Any help would be much appreciated, thanks!

        import re,sys,os,mechanize,urllib,time
        import datetime,socket

        params = urllib.urlencode({'ID':'ph896011', 'PWD':'pk1089'})
        rq = mechanize.Request("http://mybuddy.buddybuddy.co.kr/userinfo/UserInfo.asp", params)
        rs = mechanize.urlopen(rq)
        data = rs.read()
        logged_fail = r';history.back();</script>' in data
        if not logged_fail:
            print 'login success'
            try:
                params = urllib.urlencode({'PASSWORD':'pk1089'})
                rq = mechanize.Request("http://user.buddybuddy.co.kr/usercheck/UserCheckPWExec.asp", params)
                rs = mechanize.urlopen(rq)
                data = rs.read()
                print data
            except:
                print 'error'
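
    The error suggests the session cookie set by the first login never reaches the second request, because each mechanize.Request/urlopen call above is made without a shared cookie jar. Below is a minimal sketch of one way to carry the cookies across both POSTs with a single opener (same URLs and form fields as in the question; whether the second form expects additional fields such as a Referer header is an assumption you would need to verify):

        import urllib
        import urllib2
        import cookielib

        # One cookie jar shared by every request made through this opener.
        jar = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(jar))

        # First login: the server's session cookie is stored in `jar`.
        params = urllib.urlencode({'ID': 'ph896011', 'PWD': 'pk1089'})
        data = opener.open("http://mybuddy.buddybuddy.co.kr/userinfo/UserInfo.asp", params).read()

        if r';history.back();</script>' not in data:
            # Second login: the same opener sends the stored cookie automatically.
            params = urllib.urlencode({'PASSWORD': 'pk1089'})
            print opener.open("http://user.buddybuddy.co.kr/usercheck/UserCheckPWExec.asp", params).read()

    Using mechanize.Browser() instead of bare mechanize.urlopen calls would have the same effect, since the Browser object keeps its own cookie jar between requests.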

    Read the article

  • Windows XP computer can't see Windows 7 shares

    - by Alex Brault
    I am building a network containing, notably, a laptop running XP and a computer running Windows 7. Both computers have shared folders, and the Windows 7 machine has a shared printer to which another laptop running Windows 7 is able to print. If I attempt to see the laptop's network shares on the PC, everything works perfectly: I am able to see and enter the folders. The reverse operation, however, doesn't work: XP doesn't see the Windows 7 PC. Other things to note: as mentioned above, another Windows 7 computer is able to see the printer, and I can ping both computers from either PC. Both computers are in the same workgroup, named ALLAITEMENT. Password-protected sharing is turned off on the PC. The Windows 7 computer uses 40/56-bit encryption, and the Windows XP laptop has SP3.

    Read the article

  • Can I install visual studio 2012 side by side with 2010?

    - by AMgdy
    Can I install Microsoft Visual Studio Ultimate 2012 RTM side by side with Visual Studio Ultimate 2010 on Windows 7? I tried to install it, and after the installer's splash screen I got the following error: "Setup detected an issue during the operation. Please click below to check for a solution and help us improve the setup experience." Then nothing happens, and no solution is found, although I've set up the same copy on Windows 8 successfully. Any ideas?

    Read the article

  • Terse, documented, correct way to create Kerberos-backed user shares in Greyhole

    - by MrGomez
    As a migration strategy away from Windows Home Server (which is currently out of support and intractable for our needs, for a variety of reasons), our little cloister of nerds has targeted Greyhole for our shared use at home. Despite the documentation's terseness, getting the system set up for simple, single-user operation isn't especially difficult, but this scenario fails to service our needs. Among other highlights of the system, we're attempting to emulate Integrated Windows Authentication (with Kerberos) and single-user shares to keep the Windows users in the house happy and well-supported. I'm aware of the underlying systems that go into Greyhole and understand how to set up per-user shares in Samba, but the documentation doesn't seem to support cases for Greyhole to sop up these directories as separate landing zones for replication. Enter my question: are both of these cases (IWA user authentication and user-partitioned personal shares) supported by Greyhole? If so, please cite or link the supporting documentation if it exists.

    Read the article

  • Getting ZFS per dataset IO statistics (or NFS per export IO statistics)

    - by jkj
    Where do I find statistics about how IO is divided between ZFS datasets? (zpool iostat only tells me how much IO a pool is experiencing.) All the relevant datasets are used through NFS, so I'd be happy with per-export NFS IO statistics as well. We're currently running OpenIndiana.

    [edit] It seems that operation and byte counters are available in kstat:

        kstat -p unix:*:vopstats_???????
        ...
        unix:0:vopstats_2d90002:nputpage 50
        unix:0:vopstats_2d90002:nread 12390785
        ...
        unix:0:vopstats_2d90002:read_bytes 22272845340
        unix:0:vopstats_2d90002:readdir_bytes 477996168
        ...

    ...but the strange hexadecimal ID numbers have to be resolved from /etc/mnttab (better ideas?):

        rpool/export/home/jkj /export/home/jkj zfs rw,...,dev=2d90002 1308471917

    Now writing a munin plugin to use the data...
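
    A rough sketch of the kind of mapping such a plugin needs (not the munin plugin itself): parse the kstat -p output for the vopstats counters and resolve each hexadecimal instance name against the dev= mount option in /etc/mnttab, as shown in the output above. The counter and field names are assumed to match the question's sample.

        import subprocess

        def mnttab_devs(path='/etc/mnttab'):
            # Map the hex value of the dev= mount option to its mount point.
            devs = {}
            with open(path) as f:
                for line in f:
                    fields = line.split()
                    if len(fields) < 4:
                        continue
                    mount_point, options = fields[1], fields[3]
                    for opt in options.split(','):
                        if opt.startswith('dev='):
                            devs[opt[4:]] = mount_point
            return devs

        def vopstats():
            # Collect per-filesystem vnode operation counters from kstat.
            out = subprocess.check_output(['kstat', '-p', 'unix:*:vopstats_???????'])
            stats = {}
            for line in out.decode().splitlines():
                parts = line.split()
                if len(parts) != 2 or not parts[1].isdigit():
                    continue
                name, stat = parts[0].split(':')[2], parts[0].split(':')[3]
                stats.setdefault(name[len('vopstats_'):], {})[stat] = int(parts[1])
            return stats

        if __name__ == '__main__':
            mounts = mnttab_devs()
            for dev, counters in sorted(vopstats().items()):
                label = mounts.get(dev, dev)
                print(label, counters.get('read_bytes', 0), counters.get('write_bytes', 0))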

    Read the article

  • Creating and Indexing Email Database using SqlCe

    - by Anindya Chatterjee
    I am creating a simple email client program. I am using MS SqlCe as a storage of emails. The database schema for storing the messages is as follows:

        StorageId int IDENTITY NOT NULL PRIMARY KEY,
        FolderName nvarchar(255) NOT NULL,
        MessageId nvarchar(3999) NOT NULL,
        MessageDate datetime NOT NULL,
        StorageData ntext NULL

    In the StorageData field I am going to store the MIME message as a byte array. But the problem arises when I am going to implement search on the stored messages. I have no idea how I am going to index the messages on top of this schema. Can anyone please help by suggesting a good but simple schema, so that it will be effective in terms of storage space and search friendliness as well? Regards, Anindya Chatterjee
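
    SQL Server Compact has no full-text search, so one common approach (an assumption, not something from the question) is to extract the searchable text from each MIME message at save time and store it alongside the raw bytes. A minimal sketch of the extraction step, using Python's standard email package as a stand-in for whatever MIME library the client actually uses:

        import email
        from email import policy

        def searchable_text(raw_mime_bytes):
            # Pull out the header fields and the body text that a search would
            # reasonably match against; the raw MIME bytes stay untouched.
            msg = email.message_from_bytes(raw_mime_bytes, policy=policy.default)
            parts = [msg.get('Subject', ''), msg.get('From', ''), msg.get('To', '')]
            body = msg.get_body(preferencelist=('plain', 'html'))
            if body is not None:
                parts.append(body.get_content())
            return ' '.join(p for p in parts if p)

    The resulting string could go into something like a SearchText column (hypothetical name) or a separate keyword table keyed by StorageId, so searches scan plain text instead of parsing MIME blobs at query time.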

    Read the article

  • Disable .htaccess from apache allowoverride none, still reads .htaccess files

    - by John Magnolia
    I have moved all of our .htaccess config into <Directory> blocks and set AllowOverride None in the default and default-ssl. Although after restarting apache it is still reading the .htaccess files. How can I completely turn off reading these files? Update of all files with "AllowOverride" /etc/apache2/mods-available/userdir.conf <IfModule mod_userdir.c> UserDir public_html UserDir disabled root <Directory /home/*/public_html> AllowOverride FileInfo AuthConfig Limit Indexes Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec <Limit GET POST OPTIONS> Order allow,deny Allow from all </Limit> <LimitExcept GET POST OPTIONS> Order deny,allow Deny from all </LimitExcept> </Directory> </IfModule> /etc/apache2/mods-available/alias.conf <IfModule alias_module> # # Aliases: Add here as many aliases as you need (with no limit). The format is # Alias fakename realname # # Note that if you include a trailing / on fakename then the server will # require it to be present in the URL. So "/icons" isn't aliased in this # example, only "/icons/". If the fakename is slash-terminated, then the # realname must also be slash terminated, and if the fakename omits the # trailing slash, the realname must also omit it. # # We include the /icons/ alias for FancyIndexed directory listings. If # you do not use FancyIndexing, you may comment this out. # Alias /icons/ "/usr/share/apache2/icons/" <Directory "/usr/share/apache2/icons"> Options Indexes MultiViews AllowOverride None Order allow,deny Allow from all </Directory> </IfModule> /etc/apache2/httpd.conf # # Directives to allow use of AWStats as a CGI # Alias /awstatsclasses "/usr/share/doc/awstats/examples/wwwroot/classes/" Alias /awstatscss "/usr/share/doc/awstats/examples/wwwroot/css/" Alias /awstatsicons "/usr/share/doc/awstats/examples/wwwroot/icon/" ScriptAlias /awstats/ "/usr/share/doc/awstats/examples/wwwroot/cgi-bin/" # # This is to permit URL access to scripts/files in AWStats directory. # <Directory "/usr/share/doc/awstats/examples/wwwroot"> Options None AllowOverride None Order allow,deny Allow from all </Directory> Alias /awstats-icon/ /usr/share/awstats/icon/ <Directory /usr/share/awstats/icon> Options None AllowOverride None Order allow,deny Allow from all </Directory> /etc/apache2/sites-available/default-ssl <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown </VirtualHost> </IfModule> /etc/apache2/sites-available/default <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> Alias /delboy /usr/share/phpmyadmin <Directory /usr/share/phpmyadmin> # Restrict phpmyadmin access Order Deny,Allow Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> /etc/apache2/conf.d/security # # Disable access to the entire file system except for the directories that # are explicitly allowed later. # # This currently breaks the configurations that come with some web application # Debian packages. 
# #<Directory /> # AllowOverride None # Order Deny,Allow # Deny from all #</Directory> # Changing the following options will not really affect the security of the # server, but might make attacks slightly more difficult in some cases. # # ServerTokens # This directive configures what you return as the Server HTTP response # Header. The default is 'Full' which sends information about the OS-Type # and compiled in modules. # Set to one of: Full | OS | Minimal | Minor | Major | Prod # where Full conveys the most information, and Prod the least. # #ServerTokens Minimal ServerTokens OS #ServerTokens Full # # Optionally add a line containing the server version and virtual host # name to server-generated pages (internal error documents, FTP directory # listings, mod_status and mod_info output etc., but not CGI generated # documents or custom error documents). # Set to "EMail" to also include a mailto: link to the ServerAdmin. # Set to one of: On | Off | EMail # #ServerSignature Off ServerSignature On # # Allow TRACE method # # Set to "extended" to also reflect the request body (only for testing and # diagnostic purposes). # # Set to one of: On | Off | extended # TraceEnable Off #TraceEnable On /etc/apache2/apache2.conf # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "foo.log" # with ServerRoot set to "/etc/apache2" will be interpreted by the # server as "/etc/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs/2.2/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. 
# LockFile ${APACHE_LOCK_DIR}/accept.lock # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 4 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 500 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadLimit: ThreadsPerChild can be changed to this maximum value during a # graceful restart. ThreadLimit can only be changed by stopping # and starting Apache. # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. 
If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the number of messages logged to the error_log. # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include mods-enabled/*.load Include mods-enabled/*.conf # Include all the user configurations: Include httpd.conf # Include ports listing Include ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include conf.d/ # Include the virtual host configurations: Include sites-enabled/
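
    Since the dump above is long, a quick way to confirm that no remaining directive re-enables .htaccess processing is to scan the whole configuration tree for AllowOverride and AccessFileName lines (for example, the userdir.conf snippet above still grants AllowOverride FileInfo AuthConfig Limit Indexes if that module is enabled). A small sketch of such a scan, assuming the stock /etc/apache2 layout shown in the question:

        import os
        import re

        DIRECTIVE = re.compile(r'^\s*(AllowOverride|AccessFileName)\b.*', re.IGNORECASE)

        for root, dirs, files in os.walk('/etc/apache2'):
            for name in files:
                path = os.path.join(root, name)
                try:
                    with open(path, errors='replace') as f:
                        for lineno, line in enumerate(f, 1):
                            if DIRECTIVE.match(line):
                                print('%s:%d: %s' % (path, lineno, line.strip()))
                except OSError:
                    continue

    Any hit other than AllowOverride None in an enabled file is a candidate for why the .htaccess files are still being read (a plain grep -rn AllowOverride /etc/apache2 does the same job).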

    Read the article

  • Linux's best filesystem to work with 10000's of files without overloading the system I/O

    - by mhambra
    Hi all. It is known that certain AMD64 Linux systems are subject to becoming unresponsive under heavy disk I/O (see Gentoo forums: AMD64 system slow/unresponsive during disk access (Part 2)); unfortunately, I have such a system. I want to put the /var/tmp/portage and /usr/portage trees on a separate partition, but which FS should I choose for it? Requirements: if journaling is used, performance is preferred over safe data read/write operations; optimized for reading/writing 10,000s of small files. Candidates: ext2 without any journaling, and BtrFS. In Phoronix tests, BtrFS demonstrated good random access performance (far better than XFS, so it may be less CPU-aggressive). However, the unpacking operation seems to be faster with XFS there, though testing showed that unpacking a kernel tree to XFS makes my system react about 51% slower, regardless of any renice'd processes and/or schedulers. Why no ReiserFS? I Googled this (query: reiserfs ext2 cpu): "1 Apr 2006 ... Surprisingly, the ReiserFS and the XFS used significantly more CPU to remove file tree (86% and 65%) when other FS used about 15% (Ext3 and ..." Is it the same now?
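
    One way to compare candidates on the actual workload is a small many-small-files benchmark run on a test partition formatted with each filesystem. The sketch below is only a rough approximation (the path is a hypothetical mount point, and the read pass should really be run after dropping the page cache via /proc/sys/vm/drop_caches so it measures the disk rather than RAM):

        import os
        import random
        import shutil
        import time

        def small_file_benchmark(target_dir, count=10000, size=4096):
            # Write `count` small files, then read them back in random order.
            payload = os.urandom(size)
            os.makedirs(target_dir, exist_ok=True)
            start = time.time()
            for i in range(count):
                with open(os.path.join(target_dir, 'f%05d' % i), 'wb') as f:
                    f.write(payload)
            write_seconds = time.time() - start
            order = list(range(count))
            random.shuffle(order)
            start = time.time()
            for i in order:
                with open(os.path.join(target_dir, 'f%05d' % i), 'rb') as f:
                    f.read()
            read_seconds = time.time() - start
            return write_seconds, read_seconds

        if __name__ == '__main__':
            test_dir = '/mnt/fs-under-test/bench'  # hypothetical mount point
            w, r = small_file_benchmark(test_dir)
            print('write: %.1fs  read: %.1fs' % (w, r))
            shutil.rmtree(test_dir)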

    Read the article

  • RadControl DateTimePicker Selecting new time doesn't remove highlight from previous selection

    - by Jason Beck
    This is not browser-specific: the behavior exists in Firefox and IE. The RadControl is being used within a User Control in a SiteFinity site. Very little customization has been done to the control.

        <telerik:RadDateTimePicker ID="RadDateTimePicker1" runat="server" MinDate="2010/1/1" Width="250px">
            <ClientEvents></ClientEvents>
            <TimeView starttime="08:00:00" endtime="20:00:00" interval="02:00:00"></TimeView>
            <DateInput runat="server" ID="DateInput"></DateInput>
        </telerik:RadDateTimePicker>

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                RadDateTimePicker1.MinDate = DateTime.Now;
            }
        }

    Read the article

  • Keep getting "SSH timeout" errors on our AWS instance - how do I diagnose?

    - by ming yeow
    I am befuddled by this error. We keep failing to SSH into our AWS instance, whether it is during deployment or via the console. I have tried rebooting a few times, but it does not seem to be helping. Here are a couple of error messages I keep getting:

        connection failed for: HOST.NAME.amazonaws.com (Errno::ETIMEDOUT: Operation timed out - connect(2))
        111.222.333.444: ssh connection failed at 2010-07-02 03:39:37

    I also SSHed in when it was up and monitored "top" when SSH timed out. Looking at the memory logs, it does not look like any program was hogging

    Read the article

  • Disk Redundancy across different server

    - by Mascarpone
    I have 3 servers, all with the same specs: Intel CPU, 8 GB RAM, Linux or BSD, and a single 2 TB desktop SATA drive with more than 10K hours of operation and less than 300 GB used. My provider cannot install a second hard drive, but can guarantee me that the drive will be replaced immediately in case of failure, with another equally crappy drive. The likelihood of drive failure is high, and since I can't use RAID, I was thinking about keeping a backup of each machine on all the other machines, so that there are always 2 copies on 2 different drives, plus the original. I would synchronize the drives every hour with rsync to guarantee some sort of redundancy; since bandwidth inside the DC is free, it would be much cheaper than offsite backup. (A daily offsite backup is kept anyhow.) What do you think? Any suggestions?
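
    A minimal sketch of the hourly synchronization step, meant to be run from cron on each machine (host names and paths are hypothetical placeholders; each server would push to the other two):

        import subprocess

        PEERS = ['server2.example.com', 'server3.example.com']  # the other two machines
        SOURCE = '/srv/data/'                                   # trailing slash: sync contents
        DEST = 'backup@{host}:/srv/replicas/server1/'           # per-source directory on each peer

        for host in PEERS:
            cmd = ['rsync', '-a', '--delete', '--numeric-ids',
                   SOURCE, DEST.format(host=host)]
            result = subprocess.run(cmd)
            if result.returncode != 0:
                print('rsync to %s failed with exit code %d' % (host, result.returncode))

    Note that --delete mirrors deletions too, so the hourly copies protect against drive failure but not against accidental deletion; that is what the daily offsite backup mentioned above still covers.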

    Read the article

  • NHibernate: How to save list in the database?

    - by Anry
    I have the tables Order, Transaction, and Payment. Class Order has the properties:

        public virtual Guid Id { get; set; }
        public virtual DateTime Created { get; set; }
        ...

    I added the properties:

        public virtual IList<Transaction> Transactions { get; set; }
        public virtual IList<Payment> Payments { get; set; }

    These properties contain records from the [Transaction] and [Payment] tables. How do I save these lists to the database?

    Read the article

  • Using LINQ Group By to return new XElements

    - by Jon
    I have the following code and got myself confused: I have a query that returns a set of records that have been identified as duplicates, and I then want to create an XElement for each one. This should be done in one query, I think, but I'm now lost.

        var f = (from x in MyDocument.Descendants("RECORD")
                 where itemsThatWasDuplicated.Contains((int)x.Element("DOCUMENTID"))
                 group x by x.Element("DOCUMENTID").Value into g
                 let item = g.Skip(1) //Ignore first as that is the valid one
                 select item);

        var errorQuery = (from x in f
                          let sequenceNumber = x.Element("DOCUMENTID").Value
                          let detail = "Sequence number " + sequenceNumber + " was read more than once"
                          select new XElement("ERROR",
                              new XElement("DATETIME", time),
                              new XElement("DETAIL", detail),
                              new XAttribute("TYPE", "DUP"),
                              new XElement("ID", x.Element("ID").Value)));

    Read the article

  • would unexpected power cuts harm the Linux OS?

    - by Johan Elmander
    I am developing an application on a Linux embedded board (running Debian), e.g. Raspberry Pi, BeagleBoard/Bone, or Olimex. The board operates in an environment where the electricity is cut unexpectedly (it is far too complicated to place a PSU, etc.), and this happens a couple of times every day. I wonder whether the unexpected power cuts would cause crashes or problems for the Linux operating system? If it is something I should worry about, what would you suggest to prevent damage to the OS from unexpected power cuts? PS: The application needs to write some data to the storage medium (SD card), so I think it would not be suitable to mount it as read-only.
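
    For the application's own writes, one common mitigation (a general pattern, not something specific to these boards) is to write each file atomically: write to a temporary file on the same filesystem, fsync it, then rename it over the target, so a power cut leaves either the old or the new version rather than a torn file. A minimal sketch:

        import os
        import tempfile

        def atomic_write(path, data):
            # Write to a temp file in the same directory, flush it to disk,
            # then atomically rename it over the destination.
            directory = os.path.dirname(path) or '.'
            fd, tmp_path = tempfile.mkstemp(dir=directory)
            try:
                with os.fdopen(fd, 'wb') as f:
                    f.write(data)
                    f.flush()
                    os.fsync(f.fileno())
            except BaseException:
                os.unlink(tmp_path)
                raise
            os.replace(tmp_path, path)
            # fsync the directory so the rename itself survives a power cut.
            dir_fd = os.open(directory, os.O_DIRECTORY)
            try:
                os.fsync(dir_fd)
            finally:
                os.close(dir_fd)

    Using a journaling filesystem such as ext4 for the data partition, and keeping the root filesystem read-only or rarely written, further reduces the risk.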

    Read the article

  • ASP.NET MVC & Detached Entity won't save

    - by Justin
    Hey all, I have an ASP.NET MVC POST action for saving an entity on submit of a form. It works fine for insert but doesn't work for update: the database doesn't get called, so it's clearly not tracking the changes, as the entity is "detached". I'm using Entity Framework with .NET 4:

        //POST: /Developers/Save/
        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Save(Developer developer)
        {
            developer.UpdateDate = DateTime.Now;
            if (developer.DeveloperID == 0)
            {   //inserting new developer.
                DataContext.DeveloperData.Insert(developer);
            }
            //save changes - TODO: doesn't update...
            DataContext.SaveChanges();
            //redirect to developer list.
            return RedirectToAction("Index");
        }

    Any ideas would be greatly appreciated, thanks, Justin

    Read the article

  • What causes winsock 10055 errors? How should I troubleshoot?

    - by Tom Kerr
    I'm investigating some issues with winsock 10055 errors in a chain of custom applications (some of which we control, some not) and was hoping to get some advice on techniques to troubleshoot the problem.

        No buffer space available. An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

    From research, non-paged pool and ports seem to be the only resources which can cause this error. Is there another resource which might cause 10055 errors? Currently, we have perfmon counters set up on the applications, and non-paged pool usage looks low in most circumstances. Open TCP connections look low, and I am unaware of another way to monitor ports. Since it only happens in production, we are unable to use more invasive counters. Is there some other tool or procedure you would recommend to diagnose which application is causing the issue?
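
    As a lightweight way to watch port usage per process over time (an illustration, not an endorsed production tool; it assumes the third-party psutil package and, on Windows, sufficient privileges to see other processes' sockets):

        import collections
        import time

        import psutil

        # Log the processes holding the most TCP endpoints once a minute;
        # a count that climbs without bound points at a socket/port leak.
        while True:
            counts = collections.Counter()
            for conn in psutil.net_connections(kind='tcp'):
                if conn.pid:
                    counts[conn.pid] += 1
            for pid, n in counts.most_common(5):
                try:
                    name = psutil.Process(pid).name()
                except psutil.NoSuchProcess:
                    name = '?'
                print('%s (pid %d): %d TCP endpoints' % (name, pid, n))
            print('---')
            time.sleep(60)

    The same numbers are visible interactively in Sysinternals TCPView, and per-driver non-paged pool usage can be broken down with poolmon from the Windows Driver Kit.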

    Read the article

  • I can't run uwsgi as a normal user

    - by atomAltera
    I want to run the uwsgi server as the www user, but if I write:

        uwsgi --socket $SOCKET --chmod-socket 666 --pidfile $PIDFILE --daemonize $LOGFILE --chdir $CHDIR --pp $PYTHONPATH --module main --post-buffering 8192 --workers 1 --threads 10 --uid www --gid www

    a socket creation error occurs. Log:

        *** Starting uWSGI 1.4.1 (64bit) on [Mon Dec 10 22:15:23 2012] ***
        compiled with version: 4.4.5 on 17 November 2012 23:31:14
        os: Linux-2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012
        nodename: autoblog
        machine: x86_64
        clock source: unix
        pcre jit disabled
        detected number of CPU cores: 2
        current working directory: /
        writing pidfile to /tmp/uwsgi_mysite.pid
        detected binary path: /usr/local/bin/uwsgi
        setgid() to 1002
        set additional group 1004 (files)
        setuid() to 1002
        *** WARNING: you are running uWSGI without its master process manager ***
        your memory page size is 4096 bytes
        detected max file descriptor number: 1024
        lock engine: pthread robust mutexes
        unlink(): Operation not permitted [core/socket.c line 109]
        bind(): Address already in use [core/socket.c line 141]
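
    The last two log lines are the actual failure: a socket file from an earlier run (most likely created while uwsgi still ran as root) already exists, the www user is not allowed to unlink it, and the subsequent bind() then finds the address in use. One possible workaround is to clear the stale socket while still privileged, before uwsgi drops to --uid www; the path below is a hypothetical placeholder for $SOCKET:

        import os
        import pwd
        import stat

        SOCKET_PATH = '/tmp/uwsgi_mysite.sock'   # placeholder for $SOCKET

        # Remove a leftover socket owned by another user; in a sticky-bit
        # directory like /tmp only the file's owner (or root) may unlink it,
        # which is exactly why uwsgi logged "unlink(): Operation not permitted".
        if os.path.exists(SOCKET_PATH):
            st = os.stat(SOCKET_PATH)
            if stat.S_ISSOCK(st.st_mode):
                owner = pwd.getpwuid(st.st_uid).pw_name
                print('removing stale socket %s (owned by %s)' % (SOCKET_PATH, owner))
                os.unlink(SOCKET_PATH)

    Placing the socket in a directory owned by www (instead of /tmp) avoids the problem on later restarts.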

    Read the article

  • cmd.exe version comparison?

    - by Paul
    When using batch files or console applications on Windows servers, the window in question can allow text to be highlighted (marked) for copying and pasting. Doing this pauses the batch file/application, and it only resumes after the copy operation. Or at least this is what I thought to be true. Recently, on a Windows 2003 R2 SP2 server, I noted that whilst the scrolling was paused, the operations were not. Does anyone know if my description in the first paragraph is true for older Windows versions but not for Windows 2003 R2 SP2, when it changed, or of a full version comparison table for cmd.exe across different OSes? Thanks for reading. (Windows 2000 tag as that was the OS I used most before 2003 R2.)

    Read the article

  • rsAccessDenied - SQL server 2008 reporting services

    - by rboorgapally
    Hi, I am running SQL Server 2008 Developer Edition on Windows Vista Home Premium. I created a Reporting Services project that was built successfully in BIDS. When I try to deploy it, it gives the following error:

        Error rsAccessDenied: The permissions granted to user 'COMP\MYSELF' are insufficient for performing this operation.

    The MYSELF account is the only account on the system. It has administrator rights. The reporting service is running under the LocalSystem service account. If I log in to Report Manager with the MYSELF account, I cannot see the Site Settings tab. Without the Site Settings tab, how do I add or change the roles for the MYSELF account? In summary, please help me open Report Manager in the browser with the Site Settings link so that I can change the role of the user account.

    Read the article

  • Removing connections to network printers that no longer exist

    - by Jeff Hardy
    I'm trying to delete some network printer connections from a Windows Server 2003 (SP2) machine because the printer shares no longer exist. Windows shows the printers' status as "Printer not found on server, unable to connect"; this is expected; the printers are now on a different server. Only one machine ever connected to the printer shares; it no longer needs to and I'd like to clean it up. However, when I try to delete the connections, I get an error message:

        ---------------------------
        Remove Printer
        ---------------------------
        Printer connection cannot be removed. Operation could not be completed.
        ---------------------------
        OK
        ---------------------------

    The "solutions" I've found online seem to be more voodoo than anything (and they still don't work!). Does anybody know how I can delete these long-gone printers?

    Read the article

  • CalDAV : request error problem

    - by Louis W
    We have an Xserve which hosts our internal calendars through CalDAV. I personally have 4 calendars from the server set to show in my iCal. Frequently throughout the day, at random moments, I will get the error below. If I hit Ignore, it will just continue to work without any issues. This is more of an annoyance than anything (the dock icon jumping all the time). I have full permission to read anything, so it is not an actual permission issue. Anyone know what might be the issue?

        Request Error
        Access to account "FOO" is not permitted.
        The server has responded: "HTTP/1.1 403 Forbidden" to operation CalDAVAccountRefreshQueueableOperation

    Read the article

  • Permission denied when copying on a fileshare in Finder, but copying via command line works

    - by smokris
    I'm trying to copy files on an SMB fileshare. When I attempt to copy the files in Finder, I get the following error:

        The operation can’t be completed because you don’t have permission to access some of the items.

    Copying via Terminal.app (using a simple cp command) works just fine. Permissions on the folders (as seen from the computer attached to the fileshare) are as follows:

        Source:
        dr-xr-x---    2 smokris staff    16384 Oct 13 10:55 .
        dr-xr-x---@  61 smokris staff    16384 Oct 13 10:56 ..
        -r--r-----    1 smokris staff    53970 Oct 13 10:55 ._IMG_3823.JPG
        -r--r-----@   1 smokris staff  3135600 Oct 13 10:55 IMG_3823.JPG

        Destination:
        drwxrwx---    2 smokris staff    16384 Apr  9 10:17 .
        drwxrwx---    3 smokris staff    16384 Apr  9 10:15 ..

    Any ideas?

    Read the article

  • Error installing nginx with passenger-install-nginx-module on ubuntu 11.10 & rails 3.1.0

    - by user938363
    Here is the error message from installing nginx with passenger-install-nginx-module (rvmsudo). The nginx is 1.0.6, installed under /opt/nginx (the default). I ran gem install passenger successfully prior to this. Does anyone have an idea about the problem? Thanks.

        /usr/bin/ld: /home/dtt/.rvm/gems/ruby-1.9.2-p290/gems/passenger-3.0.9/ext/nginx/../common/libpassenger_common.a(aggregate.o): undefined reference to symbol 'round@@GLIBC_2.2.5'
        /usr/bin/ld: note: 'round@@GLIBC_2.2.5' is defined in DSO /usr/lib/gcc/x86_64-linux-gnu/4.6.1/../../../x86_64-linux-gnu/libm.so so try adding it to the linker command line
        /usr/lib/gcc/x86_64-linux-gnu/4.6.1/../../../x86_64-linux-gnu/libm.so: could not read symbols: Invalid operation
        collect2: ld returned 1 exit status
        make[1]: *** [objs/nginx] Error 1
        make[1]: Leaving directory `/tmp/root-passenger-2135/nginx-1.0.6'
        make: *** [build] Error 2
        --------------------------------------------

    It looks like something went wrong

    Read the article

  • Install DC again after removing on exchange server

    - by Kawharu
    I had a DC and Exchange 2010 installed on the same machine. I removed the DC role, and the Exchange server went crazy. I tried to install the DC role again to fix the problem but ran into this error when running DCPROMO:

        Active Directory Domain Services could not create the NTDS Settings object for this Active Directory Domain Controller CN=NTDS Settings,CN=DC1,CN=Servers,CN=Manukau,CN=Sites,CN=Configuration,DC=AccessGroupnz,DC=com on the remote AD DC Server1.AccessGroupnz.com. Ensure the provided network credentials have sufficient permissions.
        "The DSA operation is unable to proceed because of a DNS lookup failure."

    Do you think I need to run this in an elevated command prompt, or change credentials somewhere to domain admin? Or is it something else?

    Read the article

  • Can't change (enable/disable) features on Windows 7

    - by Cristy
    I can't make any changes to Windows Features. Even if I just enter Windows Features and press OK, I get: "An error has occurred. Not all of the features were successfully changed." I noticed this while trying to install/uninstall IIS. If I try to install IIS from the Web Platform I get:

        [Windows package manager] The referenced assembly could not be found. Operation failed with 0x80073701

    Any suggestions? EDIT: I have tried Startup Repair (it says it has found problems but is unable to solve them) and System Restore (but I didn't have an early enough restore point) :(

    Read the article
