Search Results

Search found 17950 results on 718 pages for 'directory listing'.


  • IIS 7.0 informational HTTP status codes

    - by Samir R. Bhogayta
    1xx - Informational
    These HTTP status codes indicate a provisional response. The client computer receives one or more 1xx responses before the client computer receives a regular response. IIS 7.0 uses the following informational HTTP status codes:
      100 - Continue.
      101 - Switching protocols.

    2xx - Success
    These HTTP status codes indicate that the server successfully accepted the request. IIS 7.0 uses the following success HTTP status codes:
      200 - OK. The client request has succeeded.
      201 - Created.
      202 - Accepted.
      203 - Nonauthoritative information.
      204 - No content.
      205 - Reset content.
      206 - Partial content.

    3xx - Redirection
    These HTTP status codes indicate that the client browser must take more action to fulfill the request. For example, the client browser may have to request a different page on the server, or it may have to repeat the request by using a proxy server. IIS 7.0 uses the following redirection HTTP status codes:
      301 - Moved permanently.
      302 - Object moved.
      304 - Not modified.
      307 - Temporary redirect.

    4xx - Client error
    These HTTP status codes indicate that an error occurred and that the client browser appears to be at fault. For example, the client browser may have requested a page that does not exist, or it may not have provided valid authentication information. IIS 7.0 uses the following client error HTTP status codes:
      400 - Bad request. The request could not be understood by the server due to malformed syntax. The client should not repeat the request without modifications. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 400 error:
        400.1 - Invalid Destination Header.
        400.2 - Invalid Depth Header.
        400.3 - Invalid If Header.
        400.4 - Invalid Overwrite Header.
        400.5 - Invalid Translate Header.
        400.6 - Invalid Request Body.
        400.7 - Invalid Content Length.
        400.8 - Invalid Timeout.
        400.9 - Invalid Lock Token.
      401 - Access denied. IIS 7.0 defines several HTTP status codes that indicate a more specific cause of a 401 error. The following specific HTTP status codes are displayed in the client browser but are not displayed in the IIS log:
        401.1 - Logon failed.
        401.2 - Logon failed due to server configuration.
        401.3 - Unauthorized due to ACL on resource.
        401.4 - Authorization failed by filter.
        401.5 - Authorization failed by ISAPI/CGI application.
      403 - Forbidden. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 403 error:
        403.1 - Execute access forbidden.
        403.2 - Read access forbidden.
        403.3 - Write access forbidden.
        403.4 - SSL required.
        403.5 - SSL 128 required.
        403.6 - IP address rejected.
        403.7 - Client certificate required.
        403.8 - Site access denied.
        403.9 - Forbidden: Too many clients are trying to connect to the Web server.
        403.10 - Forbidden: Web server is configured to deny Execute access.
        403.11 - Forbidden: Password has been changed.
        403.12 - Mapper denied access.
        403.13 - Client certificate revoked.
        403.14 - Directory listing denied.
        403.15 - Forbidden: Client access licenses have exceeded limits on the Web server.
        403.16 - Client certificate is untrusted or invalid.
        403.17 - Client certificate has expired or is not yet valid.
        403.18 - Cannot execute requested URL in the current application pool.
        403.19 - Cannot execute CGI applications for the client in this application pool.
        403.20 - Forbidden: Passport logon failed.
        403.21 - Forbidden: Source access denied.
        403.22 - Forbidden: Infinite depth is denied.
      404 - Not found. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 404 error:
        404.0 - Not found.
        404.1 - Site Not Found.
        404.2 - ISAPI or CGI restriction.
        404.3 - MIME type restriction.
        404.4 - No handler configured.
        404.5 - Denied by request filtering configuration.
        404.6 - Verb denied.
        404.7 - File extension denied.
        404.8 - Hidden namespace.
        404.9 - File attribute hidden.
        404.10 - Request header too long.
        404.11 - Request contains double escape sequence.
        404.12 - Request contains high-bit characters.
        404.13 - Content length too large.
        404.14 - Request URL too long.
        404.15 - Query string too long.
        404.16 - DAV request sent to the static file handler.
        404.17 - Dynamic content mapped to the static file handler via a wildcard MIME mapping.
        404.18 - Querystring sequence denied.
        404.19 - Denied by filtering rule.
      405 - Method Not Allowed.
      406 - Client browser does not accept the MIME type of the requested page.
      408 - Request timed out.
      412 - Precondition failed.

    5xx - Server error
    These HTTP status codes indicate that the server cannot complete the request because the server encountered an error. IIS 7.0 uses the following server error HTTP status codes:
      500 - Internal server error. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 500 error:
        500.0 - Module or ISAPI error occurred.
        500.11 - Application is shutting down on the Web server.
        500.12 - Application is busy restarting on the Web server.
        500.13 - Web server is too busy.
        500.15 - Direct requests for Global.asax are not allowed.
        500.19 - Configuration data is invalid.
        500.21 - Module not recognized.
        500.22 - An ASP.NET httpModules configuration does not apply in Managed Pipeline mode.
        500.23 - An ASP.NET httpHandlers configuration does not apply in Managed Pipeline mode.
        500.24 - An ASP.NET impersonation configuration does not apply in Managed Pipeline mode.
        500.50 - A rewrite error occurred during RQ_BEGIN_REQUEST notification handling. A configuration or inbound rule execution error occurred. (Note: this is where the distributed rules configuration is read for both inbound and outbound rules.)
        500.51 - A rewrite error occurred during GL_PRE_BEGIN_REQUEST notification handling. A global configuration or global rule execution error occurred. (Note: this is where the global rules configuration is read.)
        500.52 - A rewrite error occurred during RQ_SEND_RESPONSE notification handling. An outbound rule execution error occurred.
        500.53 - A rewrite error occurred during RQ_RELEASE_REQUEST_STATE notification handling. An outbound rule execution error occurred. The rule is configured to be executed before the output user cache gets updated.
        500.100 - Internal ASP error.
      501 - Header values specify a configuration that is not implemented.
      502 - Web server received an invalid response while acting as a gateway or proxy. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 502 error:
        502.1 - CGI application timeout.
        502.2 - Bad gateway.
      503 - Service unavailable. IIS 7.0 defines the following HTTP status codes that indicate a more specific cause of a 503 error:
        503.0 - Application pool unavailable.
        503.2 - Concurrent request limit exceeded.
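    To make the class logic above concrete, here is a minimal Python sketch (the URL is a placeholder, not from the article) that requests a page and reports which of the classes above the response falls into:

      # Minimal sketch: fetch a URL and report its HTTP status class.
      # "http://localhost/" is a placeholder; substitute any server to probe.
      import urllib.request
      import urllib.error

      CLASSES = {1: "Informational", 2: "Success", 3: "Redirection",
                 4: "Client error", 5: "Server error"}

      def status_class(url):
          try:
              # urlopen follows redirects, so 3xx codes rarely surface here
              with urllib.request.urlopen(url) as resp:
                  code = resp.status
          except urllib.error.HTTPError as e:
              code = e.code  # 4xx/5xx raise, but the exception carries the code
          return code, CLASSES.get(code // 100, "Unknown")

      code, label = status_class("http://localhost/")
      print(code, "-", label)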

    Read the article

  • Why does Windows spooler require an administrator account?

    - by Software Monkey
    Does anyone know what changes I might need to make to allow restricted users to print using a printer configured for spooling? My Windows XP SP3 system currently requires me to use an Admin account for printing if the printer is configured to spool documents before printing. If the printer is configured for direct printing, it works for all accounts. This used to work; some months back it just stopped, and I can't pin down why. The printer itself, an HP PSC 1200 (an old printer), is configured to give Everyone Print authority and my specific (restricted) account Full authority, that is, Print, Manage Printers, and Manage Documents. My HDD is locked down for restricted users, giving them only read authority to the entire file system except their data directories, which is how I have run my systems for years. I assume there may be a directory somewhere that I need to allow users to write to.

    Read the article

  • WARNING Retrying Bulk Insert for file:sqlldr due to Communication Error:256

    - by user702295
    I am running my engine on Linux and am receiving an intermittent message "WARNING Retrying bulk insert for file: sqlldr due to communication Error: 256". The engine seems to have completed successfully, but it is not clear if this error caused some of the forecast to not complete. It is also not clear what caused the error.

    Generally, if you see only the WARNING, it means that subsequent retries of the same load request eventually succeeded, and so the run as a whole is not affected. To learn more about what happened, look for .log/.bad files left in the engine's bin directory, or possibly a quote of them within the specific engine log that had the issue. The sqlnet.log file may also have some information, and the database server side may have a log/alert regarding what happened: look at the alert.log. In general, it could be that the database server or network was overloaded at the time, and the connection was rejected/failed/aborted either due to a specific setting on concurrent connections/sessions or inadvertently due to a glitch in the network/OS/hardware. If this repeats and becomes more frequent during the run, you should look further into it as mentioned above.

    You can also track this using either SQL*Trace or java.util.logging:

    Globally enable logging by setting the oracle.jdbc.Trace system property:
      java -Doracle.jdbc.Trace=true

    Client-side tracing: your SQLNET.ORA file should contain the following lines to produce a client-side trace file:
      trace_level_client = 10
      trace_unique_client = on
      trace_file_client = sqlnet.trc
      trace_directory_client = <path_to_trace_dir>

    Server-side tracing: to enable server-side tracing, use the following parameters:
      trace_level_server = 10
      trace_file_server = server.trc
      trace_directory_server = <path_to_trace_dir>

    Tracing levels: the following values can be used for the TRACE_LEVEL* parameters:
      16 or SUPPORT - WorldWide Customer Support trace information
      10 or ADMIN - Administration trace information
      4 or USER - User trace information
      0 or OFF - no tracing (the default)

    Additional information is readily available via the web.

    Read the article

  • Our server was rooted, but the exploit doesn't work?

    - by Salina Odelva
    Hi everyone. My friend's hosting server got rooted, and we have traced some of the attacker's commands. We found some exploits under the /tmp/.idc directory. We've disconnected the server and are now testing some of the local kernel exploits the attacker tried on our server. Here is our kernel version: 2.4.21-4.ELsmp #1 SMP. We think he got root access via the modified uselib() local root exploit, but the exploit doesn't work!

      loki@danaria {/tmp}# ./mail -l ./lib
      [+] SLAB cleanup
      child 1 VMAs 32768

    The exploit hangs like this. I've waited over 5 minutes, but nothing has happened. I've also tried other exploits, but they didn't work either. Any ideas, or experience with this exploit? We need to find the issue and patch our kernel, but we can't understand how he used this exploit to get root... Thanks

    Read the article

  • Installing GNU scientific library and linking to programme

    - by jack
    I am trying to install a statistical program which requires the GNU Scientific Library (GSL). I have successfully installed GSL through the yum command, but my statistical program gives an error when I try to run make install. I think there is a linking problem. How can I solve it?

      $ sudo yum install gsl.x86_64
      Installed:
        gsl.x86_64 0:1.15-3.fc16
      Dependency Installed:
        atlas.x86_64 0:3.8.4-1.fc16
      $ tar -xvzf prog.tgz
      $ cd prog
      $ make
      gcc -O3 -Wall -Wshadow -pedantic -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -DVER32 -I/opt/local/include/ -L/opt/local/lib/ -c -o prog.o prog.c
      In file included from prog.c:16:0:
      prog.h:7:30: fatal error: gsl/gsl_sf_gamma.h: No such file or directory
      compilation terminated.
      make: *** [prog.o] Error 1

    Read the article

  • Will adding top level directories with similar structure to existing directories change the SEO of my site?

    - by Russell Sims
    I've been pointed this way for SEO-related questions, and this one has had me pondering for a little while now. I'm recreating a site's structure. The website's content is generated through several feeds, and unless I want to place each and every one of the 10,000-odd venues into its own category manually, I can't avoid categorising each item by its address. The current structure looks like this: Homepage > region > county > city/town > venue page, and the URL looks like domain/region/county/city/venue. I'm relatively happy with this structure, as it's not too convoluted. However, we also promote deals, and we also group the venues into their respective franchises, which leads to URLs such as domain/groups and domain/deals. My question is: how would the directory structure look with these new additions? Would I have a URL that looks like domain/deals/region/county/city/venue or domain/group/region/county/city/venue, and just put a 301 or a canonical link tag on the page to prevent the duplicate pages competing with each other? Or am I worrying needlessly, and should I perhaps link straight from domain/deals to the venue page URL domain/region/county/city/venue? That bothers me a bit, though, as the deals and groups will not be in the breadcrumbs.

    Read the article

  • PowerShell format.ps1xml not reachable

    - by blsub6
    I'm trying to load Exchange Management Shell, and it gives me a big ol' red error that says:

      Import-Module : There were errors in loading the format data file:
      Microsoft.PowerShell, , %APPDATA%\Roaming\Microsoft\Exchange\RemotePowerShell\DOMAINNAME.format.ps1xml : File skipped because of the following validation exception: File %APPDATA%\Roaming\Microsoft\Exchange\RemotePowerShell\DOMAINNAME.format.ps1xml cannot be loaded. The file %APPDATA%\Roaming\Microsoft\Exchange\RemotePowerShell\DOMAINNAME\DOMAINNAME.format.ps1xml is not digitally signed. The script will not execute on the system. Please see "get-help about_signing" for more details...

    The %APPDATA% folder is stored on an external server on my network (which I can ping without problems). I am also missing a ton of PS cmdlets, which I presume are defined alongside '*.format.ps1xml'. I tried finding the directory in which the format.ps1xml is supposed to reside on the external server, and it's not even created. Can someone tell me where to start?

    Read the article

  • unique .htaccess question about mod_rewrite and RewriteCond

    - by Stephen
    I have a few rewrite rules like these:

      RewriteRule ^dir/(.*)-something.html otherdir/file1.php?var1=val&var2=$1 [L,NC]
      RewriteRule ^dir/ otherdir/file2.php?var1=val [L,NC]

    dir/ is not a real directory. Everything above works as expected. However, the user is able to type anything like mysite.com/dir/asdfasdfasdfsdf and still be redirected to file2.php. If the user types in just garbage, I'd like to serve a 404 instead. I'm guessing I need a RewriteCond that will test for blank space after the slash and only then serve file2.php, but I'm unsure how to write it.
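    As a sanity check of that idea, here is a small Python sketch (mod_rewrite actually uses PCRE, but these particular patterns behave identically) comparing the existing ^dir/ pattern with an anchored ^dir/?$ alternative. The anchored form is an assumption about the intended fix, not something from the post: it matches only the bare dir/ URL, so junk paths would fall through to a 404 while the first, more specific rule still catches the .html URLs.

      import re

      # The existing pattern: matches anything that merely starts with "dir/".
      loose = re.compile(r"^dir/")
      # A tighter alternative (hypothetical fix): match "dir" or "dir/" exactly.
      strict = re.compile(r"^dir/?$")

      for path in ["dir/", "dir", "dir/asdfasdfasdfsdf"]:
          print(path, bool(loose.match(path)), bool(strict.match(path)))
      # dir/                True  True
      # dir                 False True
      # dir/asdfasdfasdfsdf True  False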

    Read the article

  • Cannot set monitor to native resolution

    - by S B
    My problem is similar to that of many other users, but the solutions I've found do not work. Background: fresh install of 12.04 (completely updated) on a Fit-PC2 (specs). I've read in several places that the 3.x kernel that 12.04 runs on has a new psb_gfx driver which supports the GMA 500 graphics card (Poulsbo chipset). Pretty much everything works (there are some glitches which are documented, so I won't raise them here), except for the screen resolution. My monitor's native resolution is 1920x1080, but all I get is 1024x768. Output of running xrandr:

      xrandr: Failed to get size of gamma for output default
      Screen 0: minimum 1024 x 768, current 1024 x 768, maximum 1024 x 768
      default connected 1024x768+0+0 0mm x 0mm
         1024x768       0.0*

    Although I read that Ubuntu does not come with an xorg.conf file anymore, I also tried running sudo X :1 -configure, and here's the end of the output:

      Number of created screens does not match number of detected devices.
      Configuration failed.

    When I look in the xorg.conf.new file created in my home directory, it seems that for some reason X thinks I have two screens. I don't know what to do with that. Ideas, anyone? Thanks for your time.

    Read the article

  • Perform shell operation through secure shell

    - by Ben
    Is it possible to perform a shell operation from a bash script through a secure shell? Here is an example of why you may want to do this. Let's say you have a simple Unix machine that you only need to build and run on, but you want to do all of the development on another machine. I want to write a bash script with the following functionality (see the sketch after this list):

      scp a file to a location on the other machine
      ssh to the other machine
      cd into the correct directory
      make
      run the program
      scp the results to a file on the original computer
      exit the ssh session

    Is this remotely possible? (Pardon the pun :p)
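    It is. For what it's worth, here is a minimal sketch of the same sequence in Python driving the stock scp/ssh clients via subprocess; the host, paths, and make target are all placeholder assumptions:

      import subprocess

      HOST = "user@buildbox"    # placeholder
      REMOTE_DIR = "~/project"  # placeholder

      def run(cmd):
          subprocess.run(cmd, check=True)  # abort on the first failing step

      # copy the source over, build and run remotely, then fetch the results
      run(["scp", "main.c", f"{HOST}:{REMOTE_DIR}/"])
      run(["ssh", HOST, f"cd {REMOTE_DIR} && make && ./program > results.txt"])
      run(["scp", f"{HOST}:{REMOTE_DIR}/results.txt", "."])

    The ssh session ends on its own when the remote command finishes, which covers the final step; a plain bash script calling the same three commands works just as well.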

    Read the article

  • Using OData to get Mix10 files

    - by Jon Dalberg
    There has been a lot of talk around OData lately (go to odata.org for more information), and I wanted to get all the videos from Mix '10: two great tastes that taste great together. Luckily, Mix has exposed the '10 sessions via OData at http://api.visitmix.com/OData.svc, so all I have to do is slap together a bit of code to fetch the videos.

    Step 1 (cut a hole in the box): Create a new console application and add a new service reference.

    Step 2 (put your junk in the box): Write a smidgen of code:

      using System;
      using System.IO;
      using System.Linq;
      using System.Net;

      static void Main(string[] args)
      {
          // Point the generated client at the Mix '10 OData endpoint.
          var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc"));

          // Select only the WMV files.
          var files = from f in mix.Files
                      where f.TypeName == "WMV"
                      select f;

          var web = new WebClient();

          var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10");
          Directory.CreateDirectory(myVideos);

          files.ToList().ForEach(f =>
          {
              // The last URL segment is the file name.
              var fileName = new Uri(f.Url).Segments.Last();
              Console.WriteLine(f.Url);
              web.DownloadFile(f.Url, Path.Combine(myVideos, fileName));
          });
      }

    Step 3 (have her open the box): Compile and run. As you can see, the client reference created for the OData service handles almost everything for me. Yeah, I know there is some batch file to download the files, but it relies on cURL being on the machine, and I wanted an excuse to work with an OData service. Enjoy!

    Read the article

  • chroot'ing SSH home directories, shell problem.

    - by Hamza
    Hi folks, I am trying to chroot my SSH users to their home directories, and it seems to work... in a strange way. Here is what I have in my sshd_config:

      Match group restricthome
          ChrootDirectory %h

    The permissions on the user directories look like this:

      drwxr-xr-x 2 root root 1024 May 11 13:45 [user]/

    And I can see that the user logs in successfully:

      May 11 13:49:23 box sshd[5695]: Accepted password for [user] from x.x.x.x port 2358 ssh2

    (with no error messages after this). But after entering the password, the PuTTY window closes. This is a wild guess, but could it be because the user's shell is set to /bin/bash and it can't execute because of the chroot? If so, could you give me pointers on how to fix it? Would simply copying the bash binary into the user's home directory and modifying the shell work? How would I deal with the dependencies? ldd shows quite a few of those :) Comments/suggestions will be appreciated. Thanks.

    Read the article

  • Logrotate Successful, original file goes back to original size

    - by drewrockshard
    Has anyone had issues with logrotate before that cause a log file to get rotated and then go back to the same size it originally was? Here are my findings:

    Logrotate script:

      /var/log/mylogfile.log {
          rotate 7
          daily
          compress
          olddir /log_archives
          missingok
          notifempty
          copytruncate
      }

    Verbose output of logrotate:

      copying /var/log/mylogfile.log to /log_archives/mylogfile.log.1
      truncating /var/log/mylogfile.log
      compressing log with: /bin/gzip
      removing old log /log_archives/mylogfile.log.8.gz

    Log file after the truncate happens:

      [root@server ~]# ls -lh /var/log/mylogfile.log
      -rw-rw-r-- 1 part1 part1 0 Jan 11 17:32 /var/log/mylogfile.log

    Literally seconds later:

      [root@server ~]# ls -lh /var/log/mylogfile.log
      -rw-rw-r-- 1 part1 part1 3.5G Jan 11 17:32 /var/log/mylogfile.log

    RHEL version:

      [root@server ~]# cat /etc/redhat-release
      Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

    Logrotate version:

      [root@DAA21529WWW370 ~]# rpm -qa | grep logrotate
      logrotate-3.7.1-10.RHEL4

    A few notes: the service can't be restarted on the fly, so that's why I'm using copytruncate. Logs are rotating every night, according to the olddir directory having log files in it from each night.
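    One plausible explanation (an assumption, but consistent with how copytruncate works): the daemon writing the log keeps its old file offset after the truncation, so its next write recreates the file as a sparse file with the old apparent size. ls -lh reports 3.5G, while du would report almost nothing. A small Python sketch of the effect:

      import os

      # Simulate a writer that keeps its offset across a truncation, which is
      # what happens with copytruncate when the daemon did not open the log
      # with O_APPEND.
      with open("demo.log", "wb") as log:
          log.write(b"x" * 1024 * 1024)   # the daemon has logged 1 MiB
          log.flush()
          os.truncate("demo.log", 0)      # logrotate's copytruncate step
          log.write(b"new entry\n")       # next write lands at offset 1 MiB
          log.flush()

      st = os.stat("demo.log")
      print("apparent size:", st.st_size)          # back to ~1 MiB
      print("disk usage:   ", st.st_blocks * 512)  # far smaller: a sparse hole

    If that is what is happening here, the reported size is cosmetic: the bytes before the daemon's offset are a hole, not real data on disk.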

    Read the article

  • reiserfsck --rebuild-tree failed: Not enough allocable blocks

    - by mojo
    I have a reiserfs volume that required a --rebuild-tree, but is currently failing to complete when I pass it --rebuild-tree. Here is the output that I receive when running it:

      reiserfsck 3.6.19 (2003 www.namesys.com)
      # reiserfsck --rebuild-tree started at Mon Oct 26 13:22:16 2009
      # Pass 0:
      # Pass 0 The whole partition (7864320 blocks) is to be scanned
      Skipping 8450 blocks (super block, journal, bitmaps) 7855870 blocks will be read
      0%....20%....40%....60%....80%....100%        left 0, 9408 /sec
      287884 directory entries were hashed with "r5" hash.
      "r5" hash is selected
      Flushing..finished
      Read blocks (but not data blocks) 7855870
          Leaves among those 6105606
      Objectids found 287892
      Pass 1 (will try to insert 6105606 leaves):
      # Pass 1
      Looking for allocable blocks .. finished
      0%....20%....40%....60%....80%....Not enough allocable blocks, checking bitmap...there are 1 allocable blocks, btw out of disk space
      Aborted

    I can't mount it, and I can't fsck it. I've tried extending the volume, but that hasn't helped either.

    Read the article

  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5TB in a custom enclosure) showed signs of failure (Disk Utility SMART pre-failure status, several bad sectors), and I decided to replace it. I bought a new HDD (Seagate Barracuda 2TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new HDD (one partition with rsync, the other with parted cp) and then gently replaced the old HDD with the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure, the Linux box could not detect my partitions. I recovered the partitions with testdisk and restarted the computer. After the restart, I checked the SMART status of the new HDD and I get this:

      Read Error Rate
        Normalized 108
        Worst      99
        Threshold  6
        Value      16737944

    I got a high value on the Seek Error Rate as well. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status (5 minutes later). This time I got the following:

      Read Error Rate
        Normalized 109
        Worst      99
        Threshold  6
        Value      24792504

    As you see, there has been an increase in the error rate. I am unable to interpret these numbers. Is my new hard disk already dying? What are the acceptable values in these fields for Seagate hard disks? And why is the assessment still good? While I could get temperature and airflow temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD sometimes got really hot. Is it possible that the enclosure is killing the hard disks due to high temperature?... Thanks
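    A commonly cited interpretation in the smartmontools community (an assumption here, not vendor-confirmed documentation) is that Seagate packs these raw values: the low 32 bits count total operations, and the upper bits count actual errors. Under that reading, both of your raw values decode to zero errors, which would explain why the normalized value (108-109, far above the threshold of 6) and the overall assessment stay good. A sketch:

      def decode_seagate_raw(raw):
          # Assumption (community lore, not vendor documentation):
          # low 32 bits = operation count, upper bits = error count.
          return raw >> 32, raw & 0xFFFFFFFF

      for raw in (16737944, 24792504):
          errors, ops = decode_seagate_raw(raw)
          print(f"raw={raw}: {errors} errors in {ops} operations")
      # Both values decode to 0 errors; the raw number simply grows with use.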

    Read the article

  • Windows XP Does Not Follow CNAME Shares

    - by user49349
    I am supporting a mix of Windows XP Pro and Windows 7 desktops in my Active Directory network, and I am having an odd issue with XP and CNAME records. Say I have a record in my DNS for a server with an A name of something like STORAGE.company.local, and I give it a CNAME of NAS.company.local. I can go onto an XP or a 7 computer, ping NAS, and it will automatically resolve to STORAGE.company.local. If I am on Windows 7 and go to Run and enter \\STORAGE or \\NAS, it will go to that server in Explorer. If I do the same in XP, \\STORAGE will work but \\NAS will not; it just times out. Is there some setting buried in XP to make this work properly?

    Read the article

  • persistent data in Tor Browser Bundle?

    - by Snesticle
    What sort of persistent data is generated by the bundled Tor? I recently did an experiment using the Tor Browser Bundle for GNU/Linux. I created two directories, A and B, and placed an identical copy of Tor in each one. Next, I placed a simple Python script in directory A that launched the Vidalia package and, when exiting the network, deleted the entire contents of A (with the exception of itself) and rebuilt the bundle from the original archive. What surprises me is that after about ten hours of browsing each, A and B now show a distinct difference in startup time. Also curious is that I get a message in the log of B that never shows up in A: "new control connection open", which is a notice-level advisory. This has nothing to do with what I was originally testing, but now I'm interested in what exactly is going on. By the way, I do not have to rely on Tor for my personal safety, as many are forced to do, so even if you just have a hunch I'd be interested in hearing it.

    Read the article

  • Minidump folder cannot be found

    - by Saxtus
    Although I have enabled the creation of minidump files on my system, it appears that either Windows doesn't create them where the Startup and Recovery dialog points to (%SystemRoot%\Minidump), or at least I can't find them. Even the Minidump folder under the Windows directory is missing, and I have had numerous BSODs so far. I've searched all my hard drives for mini*.dmp files, only to find some old ones in a backup folder from my Vista 32-bit installation, made before I installed Windows 7 64-bit. Any thoughts on why this is happening and how to fix it?

    Read the article

  • New web site on Windows 2008 Server with IIS7 does not work

    - by user22817
    Hi guys, I have a new domain, www.biografica.ro, which was bought 3 months ago but never used until now. I've bought a server with Windows Server 2008 and installed Web Server (IIS). I've added a new site in the C:\inetpub\wwwroot directory and did the settings (assigned the default IP to the www.biografica.ro host, etc.; I did this on IIS6 one year ago, so I think I know how to set it up correctly). The problem is that the default site created by the IIS installation is working, but mine is not. It is started, but the browser says "This link appears to be broken" (in Chrome) and "The webpage cannot be found" (in IE). Do you know what I've done wrong? As I know, a domain takes time to propagate, but I think locally it should work. Please help... I've spent 3 hours and cannot find a way... :(

    Read the article

  • Netdom Join Failed To Complete Successfully

    - by BubbleMonster
    I have two servers on VMware. One is a standard install of Server 2008 R2 and the other is Server 2008 Core. Server 2008's IP is 192.168.186.135 and its computer name is SERVER01. This is a domain controller with DNS installed along with Active Directory. The domain is called contoso.com. Server 2008 Core's IP is 192.168.186.137 and its computer name is SERVER02. They can both ping each other, but when I try running the following:

      netdom join SERVER02 /domain:contoso.com

    it results in:

      The specified domain either does not exist or could not be contacted.
      The command failed to complete successfully.

    I'm thinking it might be a DNS issue? I'm new to servers and I am teaching myself. Any help would be appreciated. Thanks in advance.
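    The DNS hunch is worth testing first: a domain join needs to find the domain's SRV and A records, which only SERVER01's DNS serves, so SERVER02's network adapter should use 192.168.186.135 as its DNS server. A quick resolution check, sketched here in Python (on Server Core, nslookup contoso.com does the same job):

      import socket

      # If the client's DNS is pointed at the domain controller, contoso.com
      # should resolve to it (192.168.186.135 in this setup).
      try:
          print(socket.gethostbyname("contoso.com"))
      except socket.gaierror as e:
          print("resolution failed:", e)  # points to the client's DNS settings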

    Read the article

  • database on SSD: data only or the DBM program too?

    - by simone
    I plan on moving the data I use for statistical analysis (100-ish GB) onto an SSD. The data is either in sqlite single-file DBs or postgresql-managed data. The SSD is 240 GB, 550 MB/s read and 520 MB/s write. Should I reserve that space for the data only, or would it be a good idea to install the operating system (Mac OS X) and the application directory (Adobe Suite, Microsoft Office and the like) on the SSD too? And would it make a substantial speed difference if I also installed the postgresql binaries on the SSD? I have plenty of other space (another 300 GB hard drive, and a 1 TB one). I don't know the specs of the non-SSD drives, though they're our standard equipment on all Macs, and they're definitely OK. Thanks.

    Read the article

  • Google Drive: have a folder/file in both private and public

    - by Sander
    I have my documents in Google Drive/Docs neatly organised in a structured folder system. To avoid cluttering email inboxes, I would now like to share the contents of one single private folder by putting it in the public directory. If possible, I'd like to do this without duplicating the content. Is it possible to organise files in Google Drive so that they have two tags/folders attached to them, one private and one public? Is there another way to quickly share/link a folder into my public folder without actually moving it away from my private folder?

    Read the article

  • Decrypting Windows XP encrypted files from an old disk

    - by Uri Cohen
    I had an old Windows XP machine with an encrypted directory. When moving to a new Win7 machine, I connected the old disk as a slave in the new machine, and hence cannot access the encrypted files. Chances don't seem good, as the documentation warns: "Do not Delete or Rename a User's account from which you will want to Recover the Encrypted Files. You will not be able to de-crypt the files using the steps outlined above." On the other hand, I have full access to the machine, so maybe there's a utility which can extract the keys and use them to decrypt the files... BTW, I didn't have a password on the old machine, if that's relevant. Ideas, anyone? Thanks!

    Read the article

  • Problem launching Java on Debian: "error while loading shared libraries: libjli.so"

    - by aetaur
    I'm trying to launch Java:

      $ java -version
      java: error while loading shared libraries: libjli.so: cannot open shared object file: No such file or directory

      $ ldd /usr/lib/jvm/java-6-openjdk/jre/bin/java
          linux-gate.so.1 =>  (0xb779f000)
          libz.so.1 => /usr/lib/libz.so.1 (0xb7780000)
          libpthread.so.0 => /lib/i686/cmov/libpthread.so.0 (0xb7767000)
          libjli.so => /usr/lib/jvm/java-6-openjdk/jre/bin/../lib/i386/jli/libjli.so (0xb7762000)
          libdl.so.2 => /lib/i686/cmov/libdl.so.2 (0xb775e000)
          libc.so.6 => /lib/i686/cmov/libc.so.6 (0xb7603000)
          /lib/ld-linux.so.2 (0xb77a0000)

      $ ls /usr/lib/jvm/java-6-openjdk/jre/bin/../lib/i386/jli/
      libjli.so

    However, Java does work under root:

      $ sudo java -version
      java version "1.6.0_18"
      OpenJDK Runtime Environment (IcedTea6 1.8.7) (6b18-1.8.7-2~lenny1)
      OpenJDK Client VM (build 14.0-b16, mixed mode, sharing)

    How can I launch Java as a regular user without errors?

    Read the article

  • missing libjpeg.so.62 from ia32 shared library

    - by user170200
    I am trying to install a chemical/molecular biology modeling program called Molsoft ICM-Pro. Initially, after downloading the program and trying to open it, I got error messages saying I was missing shared libraries, and after talking with my network administrator he recommended I install the ia32 shared libraries using:

      sudo apt-get install ia32-libs

    which gives:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      ia32-libs is already the newest version.
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    so I am assuming the libraries installed correctly, but now when I try to run the program I get this error:

      ubuntu:/home/reilly/icmd icm
      icm: error while loading shared libraries: libjpeg.so.62: cannot open shared object file: No such file or directory

    So my question is: where can I get the library containing libjpeg.so.62? Additionally, I was told I would need libXmu.so.6 and libtiff.so.3. Is there a shared library that could be missing that would contain these files? I am an Ubuntu noob, so sorry if the information I provided was unclear. Any help would be immensely appreciated! BTW, I am using Ubuntu 12.04, dual-booting with Windows, on an HP Pavilion dv6.

    Read the article
