Search Results

Search found 68155 results on 2727 pages for 'data security'.


  • How does iperf calculate throughput and jitter?

    - by Someone
    I've read that iperf basically tries to send as much data down a connection as quickly as possible and reports the throughput achieved, which makes it especially useful for determining how much data the link between two machines can carry. Is it possible to gather the same results by sending regular data rather than dedicated test data? What I'm trying to do is send data in the foreground while gathering statistics (throughput and jitter) in the background. So can anyone tell me how iperf calculates these two values?
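    For UDP tests, iperf reports throughput as the bytes delivered over each reporting interval and jitter as the smoothed mean deviation of packet transit times defined in RFC 3550 (the same calculation RTP uses). A minimal sketch of both calculations, assuming you record a send timestamp and an arrival timestamp for every packet (the data layout here is illustrative, not iperf's internal one):

      def throughput_bps(total_bytes, interval_seconds):
          # Throughput is simply payload bytes delivered per second, expressed in bits.
          return total_bytes * 8 / interval_seconds

      def jitter_rfc3550(packets):
          # packets: list of (sent_time, arrival_time) tuples in seconds.
          # Jitter is a running, smoothed difference between consecutive transit times.
          jitter = 0.0
          prev_transit = None
          for sent, arrived in packets:
              transit = arrived - sent
              if prev_transit is not None:
                  d = abs(transit - prev_transit)
                  jitter += (d - jitter) / 16.0  # 1/16 gain factor from RFC 3550
              prev_transit = transit
          return jitter

    So the throughput figure needs nothing more than a byte counter and a clock, which is why you can compute it for your own foreground traffic; jitter additionally needs per-packet send timestamps, which iperf embeds in its UDP payload.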

    Read the article

  • Error installing a .NET Windows service with InstallUtil

    - by norlando
    I keep getting the error below whenever I try to use InstallUtil to install my .NET service. I run "installutil myservice.exe" from the command prompt and then get the error. Any idea what the problem is? Do I need to add another parameter?

      An exception occurred during the Install phase.
      System.Security.SecurityException: The source was not found, but some or all
      event logs could not be searched. Inaccessible logs: Security.

    Read the article

  • How to achieve the following RTO & RPO with logshipping only using SQL Server?

    - by Jimmy Chandra
    Trying to come up with a viable backup/restore and log shipping solution that achieves the following:

    - 15 minute Recovery Point Objective (no more than 15 minutes of data loss at any time)
    - 5 minute Recovery Time Objective (must be able to get the db back up and running within 5 minutes)

    I'm considering using log shipping only (which I think is kind of pushing it, but I want to know if anyone else knows how to achieve this). Some other info for consideration:

    - 40 Gbit / sec fiber channel between the primary and disaster recovery (DRC) sites
    - The sites are about 600 km apart.
    - At close of business, the amount of data generated is predicted to be about 150 MB/sec.
    - Log backup is planned for every 5 min.

    Doing some rough calculation I came up with the following numbers: 40 Gbit / sec = 5 MB / sec at 100% network efficiency; 5 MB / sec = 300 MB / min. At 300 MB / min, the total amount of data that can be transferred within the 5 min RTO is about 1.5 GB, but that leaves no time for the actual backup and restore. If we cut it down to 3 min of log shipping time, which equals ~900 MB over 3 minutes at 100% network efficiency, that leaves about 1 minute of backup time and 1 minute of restore time. I currently don't have any information on whether the system being used is capable of restoring 900 MB in 1 min, but assume it can. For the COB scenario... 150 MB/sec over the 3 min log shipping window equals about 27 GB of data over 3 minutes. I think this is where the SLA will break, since there is no way to transfer 27 GB of data over that line in 3 minutes. Can I get someone else's opinion? I am thinking database mirroring might be a better answer for this...
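    As a sanity check on this kind of arithmetic, a throwaway calculator such as the sketch below converts a link rate and efficiency into the volume that fits through a given window; the 40 Mbit/s sample rate simply mirrors the 5 MB/sec figure used above and is not an assertion about the real link speed.

      def transfer_window_capacity(link_mbit_per_s, efficiency, window_seconds):
          # How many MB can cross the link during the window at the given efficiency?
          mb_per_second = link_mbit_per_s / 8.0 * efficiency
          return mb_per_second * window_seconds

      # Volume that fits through a 40 Mbit/s link in a 3 minute window:
      print(transfer_window_capacity(40, 1.0, 180))   # 900.0 MB at 100% efficiency

      # Time needed to move 27 GB of close-of-business log across the same link:
      print(27 * 1024 / (40 / 8.0) / 60)              # roughly 92 minutes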

    Read the article

  • open-sshd service without pam support!! How can I add pam support to sshd? Ubuntu

    - by marc.riera
    Hi, I'm using AD as my user account server with ldap. Most of the servers run with UsePam yes, except this one, which lacks pam support in sshd.

      root@linserv9:~# ldd /usr/sbin/sshd
      linux-vdso.so.1 =>  (0x00007fff621fe000)
      libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000)
      libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000)
      libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000)
      libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000)
      libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000)
      libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000)
      libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000)
      /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000)

    I have these packages installed:

      root@linserv9:~# dpkg -l|grep -E 'pam|ssh'
      ii  denyhosts           2.6-2.1              an utility to help sys admins thwart ssh hac
      ii  libpam-modules      0.99.7.1-5ubuntu6.1  Pluggable Authentication Modules for PAM
      ii  libpam-runtime      0.99.7.1-5ubuntu6.1  Runtime support for the PAM library
      ii  libpam-ssh          1.91.0-9.2           enable SSO behavior for ssh and pam
      ii  libpam0g            0.99.7.1-5ubuntu6.1  Pluggable Authentication Modules library
      ii  libpam0g-dev        0.99.7.1-5ubuntu6.1  Development files for PAM
      ii  openssh-blacklist   0.1-1ubuntu0.8.04.1  list of blacklisted OpenSSH RSA and DSA keys
      ii  openssh-client      1:4.7p1-8ubuntu1.2   secure shell client, an rlogin/rsh/rcp repla
      ii  openssh-server      1:4.7p1-8ubuntu1.2   secure shell server, an rshd replacement
      ii  quest-openssh       5.2p1_q13-1          Secure shell
      root@linserv9:~#

    What am I doing wrong? Thanks.

    Edit: contents of /etc/pam.d/sshd:

      # PAM configuration for the Secure Shell service

      # Read environment variables from /etc/environment and
      # /etc/security/pam_env.conf.
      auth       required     pam_env.so # [1]
      # In Debian 4.0 (etch), locale-related environment variables were moved to
      # /etc/default/locale, so read that as well.
      auth       required     pam_env.so envfile=/etc/default/locale

      # Standard Un*x authentication.
      @include common-auth

      # Disallow non-root logins when /etc/nologin exists.
      account    required     pam_nologin.so

      # Uncomment and edit /etc/security/access.conf if you need to set complex
      # access limits that are hard to express in sshd_config.
      # account    required     pam_access.so

      # Standard Un*x authorization.
      @include common-account

      # Standard Un*x session setup and teardown.
      @include common-session

      # Print the message of the day upon successful login.
      session    optional     pam_motd.so # [1]

      # Print the status of the user's mailbox upon successful login.
      session    optional     pam_mail.so standard noenv # [1]

      # Set up user limits from /etc/security/limits.conf.
      session    required     pam_limits.so

      # Set up SELinux capabilities (need modified pam)
      # session    required     pam_selinux.so multiple

      # Standard Un*x password updating.
      @include common-password

    Read the article

  • Can I have 2 Gbit over 1 Gbit NICs?

    - by Daniel
    So this really baffles me. Apparently, because 1 Gbit Ethernet can transmit data in both directions simultaneously, it should be possible to get 2 Gbit of data transfer on a single NIC (1 Gbit send and 1 Gbit receive). People claim that because 1 Gbit is full duplex (almost always), it is effectively 2 Gbit in total. My intuition and electrical background tell me something is not right here: 4 twisted pairs with 250 Mbit capacity each gives 1 Gbit, unless it really is possible to transfer data in both directions simultaneously. I did a test with iperf: Ubuntu Server 12.04 <-- MacBook Pro, both with decent CPU speed. Testing the connection speed in each direction individually, on the Mac I see 112 MB/s regardless of which direction the data is going; on Ubuntu, with vnstat and ifstat, I got 970 Mbit speeds. Now, launching iperf in server mode on both machines at the same time and sending data using two iperf clients shows that the Ubuntu box is, for example, sending at 600 Mbit and receiving at 350 Mbit, which adds up to pretty much a 1 Gbit link. So to me there is no magical 2 Gbit. Can someone confirm that, or tell me why I'm wrong? Another thing that confuses me is the fact that a 24-port switch has, for example: throughput up to 50.6 Mpps, switching capacity 68 Gbps, switch fabric speed 88 Gbps, which would suggest they can handle 2 Gbit per port.

    Read the article

  • Apache and file permissions

    - by Matthew
    I'm running LAMP on Ubuntu 8.04. Apache's username and group are www-data. I put my connection details and AES key in a file in a directory that's not web served. I chown-ed the files to www-data:www-data and set the permissions to 700. Still, the script that require()s these files will only run if I chmod the files to 755. What am I missing?

    Read the article

  • How can I get the path to a Windows service executable WITHOUT using sc qc?

    - by Jared
    I need to query a Windows service for the path to its executable via the command prompt. I think the way I would do this is sc qc myServiceName, but when I do that, I get the following error:

      [SC] QueryServiceConfig FAILED 122:
      The data area passed to a system call is too small.
      [SC] GetServiceConfig needs 1094 bytes

    I think this means that the sc command is passing a buffer to another library that is too small for the data that needs to be returned. Instead of sc nicely retrying with a larger buffer (1094 bytes), it bombs out and gives me this ugly error message. Thanks Micro$oft. So is there a way to work around this error? I just need the path to the executable, but I'll parse it out of some other text if needed.
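    One way around sc qc entirely is to read the service's ImagePath value straight out of the registry, under HKLM\SYSTEM\CurrentControlSet\Services. A rough sketch in Python (the service name is a placeholder taken from the question):

      import winreg

      def service_image_path(service_name):
          # Every installed service has a key under Services whose ImagePath value
          # holds the command line used to launch it (possibly with quotes/arguments).
          key_path = r"SYSTEM\CurrentControlSet\Services\{}".format(service_name)
          with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
              value, _type = winreg.QueryValueEx(key, "ImagePath")
          return value

      print(service_image_path("myServiceName"))  # placeholder service name

    The same value is also available directly from the command prompt with reg query "HKLM\SYSTEM\CurrentControlSet\Services\myServiceName" /v ImagePath, or with wmic service where name='myServiceName' get PathName.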

    Read the article

  • Windows 2008 R2: can't extend C drive, mystery partitions

    - by wfaulk
    I have a Windows 2008 R2 server running under VMware ESX 4.0.0. I have reallocated disk space to it in order to extend the C drive, but Disk Management has "Extend Volume" greyed out. DISKPART shows more partitions than Disk Management shows, including one after the volume I'm trying to extend, which would explain why Disk Management isn't allowing the extension.

      Disk Management shows:
        System Reserved / 100 MB NTFS / Healthy (System)
        (C:) / 39.39 GB NTFS / Healthy (Boot, Page File, Crash Dump)
        10.00 GB / Unallocated

      DISKPART shows:
        Partition 1    Dynamic Data    992 KB     31 KB
        Partition 2    Dynamic Data    100 MB     1024 KB
        Partition 3    Dynamic Data    39 GB      101 MB
        Partition 4    Dynamic Data    1024 KB    39 GB

    My question at this point is: what the heck are partitions 1 and 4, where did they come from, why doesn't Disk Management show them, and, most importantly, can I delete partition 4 in order to extend partition 3?

    Read the article

  • Why is a FLAC encoded from a decoded MP3 bigger than the MP3?

    - by Ryan Thompson
    To be more precise than in the title, suppose I have an MP3 file that is 320 kbps. If I decompress it, then logically, all the data except for roughly 320 kilobits out of each second of audio should be redundant data, able to be compressed away. So, when I encode the decompressed file to FLAC, or any other lossless codec, why is it so much larger? On a related note, is it theoretically possible to losslessly recover the source MP3 audio from the decompressed WAV? (I know the MP3 itself is lossy; I'm asking if it's possible to re-encode without any further loss.)

    EDIT: Let me clarify the related question and the rationale behind it. Suppose I have a WAV that was decompressed from an MP3 file (and assume I don't have the MP3 itself for some reason). If I don't want to lose any more quality, I can re-encode it with FLAC or any other lossless encoder and get a larger file just to maintain the same quality. Or I can re-encode it to MP3 again and get the same size as the original but lose more data. Obviously, neither case is ideal: I can have either the original size or the original quality, but not both (I mean the quality of the original MP3, not the original lossless source). My question is: can we get both? Is it theoretically possible to recover the lossy compressed data from the lossy decompressed data, without losing even more? If it is possible, I could imagine a lossless compression algorithm that compresses the audio with FLAC, then also scans the audio for any signs of previous lossy compression, and if detected, recompresses it losslessly to the original lossy file, keeping whichever file is smaller.
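    For a sense of scale, a back-of-the-envelope calculation shows why the lossless re-encode ends up several times larger than the 320 kbps source; the FLAC compression ratio below is an assumed typical value, not a measurement.

      # One minute of CD audio: 44100 samples/s * 16 bits * 2 channels of decoded PCM.
      pcm_bits_per_second = 44100 * 16 * 2      # 1,411,200 bits/s
      mp3_bits_per_second = 320000              # the original lossy bitrate

      seconds = 60
      pcm_mb = pcm_bits_per_second * seconds / 8 / 1e6   # ~10.6 MB of WAV per minute
      mp3_mb = mp3_bits_per_second * seconds / 8 / 1e6   # ~2.4 MB of MP3 per minute

      assumed_flac_ratio = 0.6   # assumption: FLAC typically shrinks PCM to roughly 50-70%
      flac_mb = pcm_mb * assumed_flac_ratio              # ~6.4 MB of FLAC per minute

      print(round(pcm_mb, 1), round(mp3_mb, 1), round(flac_mb, 1))

    FLAC can only squeeze out the redundancy its linear-prediction model can see; it has no knowledge of what detail the MP3 encoder already threw away, so it cannot get anywhere near the 320 kbps figure.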

    Read the article

  • Adding a single 300 GB SCSI drive to a PowerEdge 2850

    - by John Steele
    I have a 2850 set up with three 146 GB drives and two partitions: a 12 GB system partition with Server 2003 SP2 and a 261 GB data partition. I'm strapped for disk space on the data partition and keep having to push data around. I wanted to add a single 300 GB drive for less critical data; is this possible? Or is it better to add two 300 GB drives for another RAID 1 configuration? This is my church network, and while it is mission critical it is not enterprise, so I can take it down for a few hours. Any pointers to documentation or direct help would be greatly appreciated. John

    Read the article

  • IIS7 FTP Setup - An error occured during the authentication process. 530 End Login failed

    - by robmzd
    I'm having a problem very similar to "IIS 7.5 FTP IIS Manager Users Login Fail (530)" on Windows Server 2008 R2 Standard. I have created an FTP site and an IIS Manager user but am having trouble logging in. I could really do with getting this working with the IIS Manager user rather than by creating a new system user, since I'm fairly restricted with those accounts. Here is the output when connecting locally through the command prompt:

      C:\Windows\system32>ftp localhost
      Connected to MYSERVER.
      220 Microsoft FTP Service
      User (MYSERVER:(none)): MyFtpLogin
      331 Password required for MyFtpLogin.
      Password: ***
      530-User cannot log in.
       Win32 error:   Logon failure: unknown user name or bad password.
       Error details: An error occured during the authentication process.
      530 End
      Login failed.

    I have followed the guide to configure FTP with IIS Manager authentication in IIS 7 and "Adding FTP Publishing to a Web Site in IIS 7". Things I have done and checked:

    - The FTP Service is installed (along with FTP Extensibility).
    - Local Service and Network Service have been given access to the site folder.
    - Permission has been given to the config files.
    - Granted read/write permissions to the FTP root folder.
    - The Management Service is installed and running.
    - "Enable remote connections" is ticked, with "Windows credentials or IIS Manager credentials" selected.
    - The IIS Manager user has been added to the server (root connection in the IIS connections branch).
    - The new FTP site has been added.
    - IIS Manager Authentication has been added to the FTP authentication providers.
    - The IIS Manager user has been added to the IIS Manager Permissions list for the site.
    - Added read/write permissions for the user in the FTP Authorization Rules.

    Here's a section of the applicationHost config file associated with the FTP site:

      <site name="MySite" id="8">
        <application path="/" applicationPool="MyAppPool">
          <virtualDirectory path="/" physicalPath="D:\Websites\MySite" />
        </application>
        <bindings>
          <binding protocol="http" bindingInformation="*:80:www.mydomain.co.uk" />
          <binding protocol="ftp" bindingInformation="*:21:www.mydomain.co.uk" />
        </bindings>
        <ftpServer>
          <security>
            <ssl controlChannelPolicy="SslAllow" dataChannelPolicy="SslAllow" />
            <authentication>
              <basicAuthentication enabled="true" />
              <customAuthentication>
                <providers>
                  <add name="IisManagerAuth" enabled="true" />
                </providers>
              </customAuthentication>
            </authentication>
          </security>
        </ftpServer>
      </site>
      ...
      <location path="MySite">
        <system.ftpServer>
          <security>
            <authorization>
              <add accessType="Allow" users="MyFtpLogin" permissions="Read, Write" />
            </authorization>
          </security>
        </system.ftpServer>
      </location>

    If I connect to the site (not FTP) from my local IIS Manager using the same IIS Manager account details, it connects fine; I can browse files and change settings as I would locally (though I don't seem to have an option to upload files). Trying to connect via FTP, though, either through the browser or FileZilla etc., gives me:

      Status:   Resolving address of www.mydomain.co.uk
      Status:   Connecting to 123.456.12.123:21...
      Status:   Connection established, waiting for welcome message...
      Response: 220 Microsoft FTP Service
      Command:  USER MyFtpLogin
      Response: 331 Password required for MyFtpLogin.
      Command:  PASS *********
      Response: 530 User cannot log in.
      Error:    Critical error
      Error:    Could not connect to server

    I have tried collecting ETW traces for FTP sessions; in the logs I get a FailBasicLogon followed by a FailCustomLogon, but no other info:

      FailBasicLogon    SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | ErrorCode=0x8007052E
      StartCustomLogon  SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | LogonProvider=IisManagerAuth
      StartCallProvider SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | provider=IisManagerAuth
      EndCallProvider   SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55}
      EndCustomLogon    SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55}
      FailCustomLogon   SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | ErrorCode=0x8007052E
      FailFtpCommand    SessionId={cad26a97-225d-45ba-ab1f-f6acd9046e55} | ReturnValue=0x8007052E | SubStatus=ERROR_DURING_AUTHENTICATION

    In the normal FTP logs I just get:

      2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 ControlChannelOpened - - 0 0 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
      2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 USER MyFtpLogin 331 0 0 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
      2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 PASS *** 530 1326 41 e2d4e935-fb31-4f2c-af79-78d75d47c18e -
      2012-10-23 16:13:11 123.456.12.123 - 123.456.12.123 21 ControlChannelClosed - - 0 0 e2d4e935-fb31-4f2c-af79-78d75d47c18e -

    If anyone has any ideas then I would be very grateful to hear them. Many thanks.

    Read the article

  • How to make TimeMachine back up contents of any path or mounted volume

    - by Olfan
    I keep different types of data in different encrypted sparsebundle images (say, one for each client) which automatically mount upon login but can't be opened by anybody other than myself. So, after login I have a number of virtual volumes in /Volumes/, which keeps my client data both secure and organized. How do I include the data inside these virtual volumes in TimeMachine's backups, or data residing in any path on any partition/volume? I found a promising solution description at blog.eurocomp.info involving editing com.apple.TimeMachine.plist, but all I can get TimeMachine to do is back up the sparsebundle files themselves. I want it to back up the files inside the mounted image, something like adding /Volumes/Client_abc/ to TimeMachine's search path. Please do not redirect me to this previous question, as it doesn't solve the problem at all. Please also refrain from telling me why you think I should not want this answer, as that will not solve anything either. Please lastly don't say "it can't be done" unless you can technically prove that claim.

    Read the article

  • Remote Desktop doesn't recognize username change

    - by Unsigned
    There are two active user accounts on the Windows 7 Professional server: Owner and Guest. Owner is an Administrator with a password; Guest is the default Guest account with no password, but has been added to Remote Desktop Users. When attempting to connect to the server from a Windows 7 Professional client, Guest accepts RD connections fine; however, Owner throws the error "Unable to connect to Local Security Authority." I created a new Administrator account, named Remote, with the same password as Owner, and Remote Desktop worked perfectly. I then deleted Owner and renamed Remote to Owner. Now Remote Desktop gives the same error ("Unable to connect to Local Security Authority") when attempting to log into the new Owner. However, attempting to log into Remote (even though it was renamed to Owner) works. I'm completely at a loss here; what is going on? Why won't Owner work, and why does Remote Desktop still use the old name on the renamed account?

    Read the article

  • Alternatives to Remote Storage Service under Windows Server 2008 R2

    - by ObligatoryMoniker
    I am working on setting up a new Windows Server 2008 R2 file server for our organization. The functionality offered by the Remote Storage Service in previous versions of Windows would meet our needs for segmenting our data, so that we can have different backup schedules for different tiers of data based on how frequently the data is used and updated. What software provides the same or similar functionality for Server 2008 R2?

    Read the article

  • How important is it to install in the Program Files folder?

    - by eran
    In a proper installation of an average piece of software, its executables would be in the Program Files folder, its user data in the user's application data folder, and its non-user-specific data in the all-users application data folder; and it should usually be able to run under non-administrative privileges. These guidelines could easily be ignored on XP, but they are an issue on Vista and 7 due to UAC. We're on the verge of releasing a major version of our software. It's a CMS, used by our clients as their main work tool, and their IT staff are well familiar with it. If we want to be fully compatible with Windows 7 we have to make quite a few changes, and we're already on a tight schedule. The question is: we could easily have our clients install our software outside of Program Files, or have them run it as administrators. I think that's wrong, but I need some ammunition: why should we install in Program Files, with all the limitations that come with it?

    Read the article

  • IIS7 Custom ASP.NET Errors

    - by Nathan
    I'm trying to set up a custom error page for the IIS 7 404.13 (Content length too large) error. Here are the relevant sections of my web.config file:

      <system.webServer>
        <httpErrors errorMode="Custom" existingResponse="Replace">
          <remove statusCode="404" subStatusCode="13" />
          <error statusCode="404" subStatusCode="13" prefixLanguageFilePath=""
                 path="/FileUpload/Test.aspx" responseMode="ExecuteURL" />
        </httpErrors>
        <security>
          <requestFiltering>
            <requestLimits maxAllowedContentLength="10240" />
          </requestFiltering>
        </security>
      </system.webServer>

    The response that is being sent back to the client is blank. The Test.aspx file is not blank. Any idea what's going on here?

    Read the article

  • conditional formatting for subsequent rows or columns

    - by Trailokya Saikia
    I have data in a range of cells (say six columns and one hundred rows). The first four columns contain data and the sixth column holds a limiting value; the limiting value is different for every row, and I have one hundred such rows. I am successfully using conditional formatting (e.g. cells in the first five columns containing data less than the limiting value are made red) for the first row. But how do I copy this conditional formatting so that it applies to the entire hundred rows with their respective limiting values? I tried the format painter, but it retains the same source cell (here, the limiting value) for the conditional formatting in the second and subsequent rows, so at the moment I would have to apply the conditional formatting to each row separately.
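    In case it helps frame answers: the usual way to avoid per-row rules (assuming the limiting values sit in column F and the data in columns A-E) is to select the whole range A1:E100 at once and add a single formula-based rule such as =A1<$F1, locking only the column of the limiting value with the $ sign. The relative row reference then makes each row compare against its own limit.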

    Read the article

  • PHP Connection Strings

    - by Campo
    I have set up mirroring on my MSSQL server with automatic failover. Let's say the SQL Server goes down. I have found connection strings to reconnect the site to the mirror database for MSSQL 2008:

      Data Source=myServerAddress;Failover Partner=myMirrorServerAddress;Initial Catalog=myDataBase;Integrated Security=True;

    or

      Provider=SQLNCLI10;Data Source=myServerAddress;Failover Partner=myMirrorServerAddress;Initial Catalog=myDataBase;Integrated Security=True;

    or

      Driver={SQL Server Native Client 10.0};Server=myServerAddress;Failover_Partner=myMirrorServerAddress;Database=myDataBase;Trusted_Connection=yes;

    Is there something similar I can use for PHP to do the same sort of thing, so that if only the database goes down the site instantly fails over to the mirror database as soon as it is online? Thoughts/suggestions/comments all appreciated. I checked connectionstring.com but did not find a section for PHP.

    Read the article

  • Pivot Table Does Not Refresh Source

    - by AME
    Typically, when selecting a data source for a pivot table in Excel 2013, it is possible to refresh the table by selecting "refresh table" or "refresh all". This triggers an update in the Pivot Table based on changes in the underlying data source. However, I am running into a case where this functionality does not prompt a refresh of the pivot table. What might be causing the pivot table in Excel 2013 to remain static when selecting "refresh data"?

    Read the article

  • Complete Active Directory redesign and GPO application

    - by Wolfgang Kuehne
    After much testing, hundreds of tries and hours invested, I decided to consult you experts here.

    Overview: I want to apply a GPO to our users which adds a specific site to the Trusted Sites in Internet Explorer settings for all users. However, the more I try, the more confusing the results become: the GPO is applied either to one group of users or to another. Finally, I came to the conclusion that this weird behavior is caused by the poor organization of Users and Groups in Active Directory. As such, I want to attack the problem at the root and redesign the Active Directory users and groups.

    Scenario: There is one Domain Controller, and we use Terminal Services (so there is a Terminal Server as well). Users usually log on to the Terminal Server using Remote Desktop to perform their daily tasks. I would classify the users in the following way:

    - IT: Admins, Software Development
    - Business: Administration, Management

    The current structure of the Active Directory users and groups is a result of the previous IT management. The company used Small Business Server, which created multiple default user groups and containers. Unfortunately, the guys working before me left no documentation at all. Now, as I inherit this structure, I am in no man's land with no idea which direction to head first. As you can see, the Active Directory users and groups have become a bit confusing. There is no SBS anymore, but when migrating from SBS to the current Windows Server 2008 R2 environment the guys before me simply copied the same structure.

    The real question: Where should I start cleaning, while ensuring that I won't totally break the current infrastructure? What is a sensible organization for the scenario explained above?

    Possibly useful info about the current structure:

    - Computers folder: contains the Terminal Services Computers user group
      - Members: TerminalServer computer located at Server -> Terminalserver OU
      - Member of: NONE
    - Foreign Security Principals: EMPTY
    - Managed Service Accounts: EMPTY
    - Microsoft Exchange Security Groups: not sure if needed, our email is administered by an external service provider
    - Distribution Groups: not sure if needed
    - Security Groups: there are a couple of groups which are needed
    - SBS users: contains all the users
    - Terminalserver: contains only the TerminalServer machine

    Read the article

  • jboss 4: enable UsersRolesLoginModule, where must users.properties files be placed?

    - by golemwashere
    I have an application (CQ5) that requires enabling unauthenticatedIdentity in jbossdir/conf/login-config.xml. I used:

      <authentication>
        <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
          <module-option name="unauthenticatedIdentity">nobody</module-option>
        </login-module>
      </authentication>

    Then I tried to copy jbossdir/conf/props/jmx-console-users.properties and jmx-console-roles.properties into users.properties and roles.properties (same dir). I still get this error:

      ERROR [org.jboss.security.auth.spi.UsersRolesLoginModule] Failed to load users/passwords/role files
      java.io.IOException: No properties file: users.properties or defaults: defaultUsers.properties found

    Where should I put those files?

    Read the article

  • Unable to use Gmail in Thunderbird 3

    - by Jatin Ganhotra
    Mozilla Thunderbird v3.1.7. I am trying to set up Gmail, but none of the settings are working. I have tried every resource: blogs, tutorials, instructions by Google, instructions by Thunderbird, and questions here. But it's still not working. My settings are as follows:

      Server Settings
        Server Type: IMAP Mail Server
        Server Name: imap.gmail.com
        Username: [email protected]
        Port: 993 (Default: 993)
        Connection Security: SSL/TLS
        Authentication method: Encrypted password

      Outgoing Server (SMTP)
        Server Name: smtp.gmail.com
        Port: 587 (Default: 25)
        Connection Security: STARTTLS
        Authentication method: Encrypted password
        Username: [email protected]

    IMAP is enabled in my Gmail settings. The error I get is:

      ERROR: Connection to the server [email protected] timed out.

    I am behind a proxy server, and I have configured those settings under Thunderbird Preferences -> Advanced -> Network and Disk Space -> Connection Settings -> Manual Proxy Configuration. The proxy configuration works: when I created a Blogs and News Feeds account it worked properly and fetched the feeds for me, so Thunderbird is configured properly as far as the proxy settings go. Help me.

    Read the article

  • MySQL doesn't talk to PHP anymore (EasyPHP)

    - by Matt Ellen
    I've just upgraded from Windows XP to Windows 7 (64-bit). I was using EasyPHP 5.3.1 to develop my website, but since the upgrade I can't get PHP to talk to MySQL. Even the phpMyAdmin page doesn't load. I've tried installing the latest 64-bit version of MySQL in place of the supplied version, but that hasn't helped. The queries just don't seem to reach MySQL. I have verified that the database itself works by running mysql on the command line. phpMyAdmin doesn't display an error, just a blank page. The error coming up from my website is:

      Warning: PDO::__construct() [pdo.--construct]: [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in E:\services\EasyPHP-5.3.1\www\IdeaWeb\classes\Security.inc on line 14

      Fatal error: Maximum execution time of 60 seconds exceeded in E:\services\EasyPHP-5.3.1\www\IdeaWeb\classes\Security.inc on line 0

    Does anyone know how to solve this (i.e. get MySQL talking to PHP)?

    Read the article

  • .htaccess RewriteRule Problem

    - by Kunal Gautam
    Before asking the question, let me state some assumptions. There are 5 files on my web server:

      index.php
      config.php
      read.php
      write.php
      .htaccess

    I've written the following URL rewriting rule in .htaccess:

      RewriteEngine on
      RewriteRule ^(\w+)$ read.php?id=$1

    Now when I type domain.com/xyz it fetches data from read.php?id=xyz, which is nice :) But when I type domain.com/index it fetches data from index.php, and when I type domain.com/write, domain.com/config or domain.com/read it fetches data from write.php, config.php and read.php respectively. I want the data to be fetched from read.php?id=index, read.php?id=config, read.php?id=read or read.php?id=write. Can anyone help me with this? Sorry for my poor English.

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file; the log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you.

    Read the article
