Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Windows 2008 RemoteAPP client disconnects within a matter of minutes.

    - by Jeroen Wilke
    I'm having an odd problem with Windows 2008 TS, and remote applications specifically. The situation is as follows: the TS idle timeout is disabled via GPO, and TS terminates disconnected sessions after 1 hr (via GPO). My users can log on to the Terminal Server and get a full desktop, OR use .rdp files that give access to a few remote applications. When a user connects to a full desktop, everything is fine and dandy: they remain logged on indefinitely, and when they disconnect the session is terminated after an hour. However, when a user connects using a remote application link, the client seems to disconnect after only a few minutes of inactivity; when you click the window, the session reconnects. Event IDs on the TS server: 4779 ("This event is generated when a user disconnects from an existing Terminal Services session, or when a user switches away from an existing desktop using Fast User Switching") and 4778 ("This event is generated when a user reconnects to an existing Terminal Services session, or when a user switches to an existing desktop using Fast User Switching"). Users are connecting directly to 3389, not through a TS Gateway at the moment. This behavior is consistent across the different clients that we have: full desktop is fine, RemoteApp constantly disconnects. The .rdp file used doesn't list any interesting parameters, aside from what application to launch and where to find it. Can someone explain to me how there can be a difference in behaviour between full desktop and RemoteApp, since essentially they use the exact same client? Regards, Jeroen

    Read the article

  • kickstart ks.cfg: Where should 'url' point?

    - by Stefan Lasiewski
    I have a kickstart file (ks.cfg) on a floppy (old style). I am trying to install CentOS 5.4. The top of my ks.cfg says this: install # Install from local cdrom or over the network. #cdrom url --url http://kickstart.example.org/pub/centos/5.4/ On the Apache server side, this command is failing with these 404s: kickstart.example.org 192.168.16.180 - - [01/Jun/2010:17:24:30 -0700] "GET /pub/centos/5.4///disc1/.discinfo HTTP/1.1" 404 314 "-" "urlgrabber/3.1.0" kickstart.example.org 192.168.16.180 - - [01/Jun/2010:17:24:43 -0700] "GET /pub/centos/5.4/repodata/repomd.xml HTTP/1.1" 404 316 "-" "urlgrabber/3.1.0 yum/3.2.22" It seems that the value of my url line doesn't match the directory structure on the server. I swear this worked a few months ago. Someone else maintains the Yum repository, and they say nothing has changed. What should the value of the url line be? Should it only include the OS (/pub/centos/5.4/), or should it include the architecture (/pub/centos/5.4/os/x86_64)? I see that Kickstart is trying to grab a file called 'repomd.xml', but why is it looking in '/pub/centos/5.4/repodata/repomd.xml', when these files actually exist at '/pub/centos/5.4/os/x86_64/repodata/repomd.xml' and other locations at '/pub/centos/5.4/*/$ARCH/repodata/repomd.xml'? I don't see this documented or explained well in the Red Hat 5 Installation Guide.
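
    For what it's worth, a minimal sketch of how the url line usually has to look, assuming the repository really does live under the os/x86_64 tree as the 404s above suggest (the exact path is an assumption): anaconda wants the directory that directly contains images/, repodata/ and .discinfo, not the bare version directory.

        install
        # point at the tree that contains images/ and repodata/, not /pub/centos/5.4/ itself
        url --url http://kickstart.example.org/pub/centos/5.4/os/x86_64/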

    Read the article

  • Error occurs when Win 7 opens

    - by user1708748
    I have an HP customized laptop, barely a week old. Since yesterday it has given me an error message every time Windows opens: error code (0x40000015), referring to an exception at location (0x0033fa4d). I have 64-bit Win 7 and had been transferring files and installing programs. 8 GB RAM, 650 GB hard drive. Not sure which program caused this. Questions that I've seen with this error code all say it occurs while shutting down. A search on the error code told me to use a removal tool; when I downloaded the tool, I got a message saying it is not compatible with Windows 7. Is that really possible? A search on the location code yielded nothing. I am an intermediate user. I can restore to an earlier time, but I don't know which program might do this again.

    Read the article

  • How mod_cache working with "must-revalidate" and "max-age"?

    - by Dmitriy Sosunov
    Quick question before I explain my flow: can mod_cache perform revalidation with If-None-Match only once max-age has expired, when it is configured in reverse proxy mode? My goal is to reduce the number of revalidation requests to our origin server. For instance: the first request goes to the origin server and then mod_cache saves the response into the cache according to the cache-control: max-age header, and only when max-age has expired should mod_cache revalidate with If-None-Match. Currently, mod_cache revalidates every request, regardless of whether max-age is defined or not. My configuration is for Apache 2.4.3 (Windows); on Linux I see the same behavior shown below. ServerName proxy.lo ProxyRequests Off ProxyPreserveHost Off Header set Vary "Accept, Content-Type, Content-Encoding, Accept-Language" RequestHeader set X-Forwarded-Proto "http" # modify header for user agent's Header set Cache-Control "private, no-cache, no-store, no-transform" CacheQuickHandler off CacheDefaultExpire 300 # the origin server do not provide last-modified CacheIgnoreNoLastMod On CacheIgnoreCacheControl On # the origin server define cache-control: private, no-store only for user agents # Therefore, I would like ignore those headers on the proxy server. CacheStorePrivate On CacheStoreNoStore On CacheEnable disk / CacheRoot "C:/Apache.Cache" CacheDirLevels 5 CacheDirLength 4 CacheMinExpire 15 CacheDetailHeader on CacheHeader on KeepAlive Off ProxyPass / http://origin.lo/ ProxyPassReverse / http://origin.lo/ Also, I have turned on the debug log level to see how mod_cache handles the content for caching. I provide this to show that mod_proxy always decides that the content isn't fresh. Why? max-age was provided (see below). [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(624): [client 192.168.1.100:63741] AH00698: cache: Key for entity /testpage?(null) is http://proxy.lo/testpage? [Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(569): [client 192.168.1.100:63741] AH00709: Recalled cached URL info header http://proxy.lo/testpage? [Sun Nov 04 11:58:42.899890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(865): [client 192.168.1.100:63741] AH00720: Recalled headers for URL http://proxy.lo/testpage? [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] cache_storage.c(320): [client 192.168.1.100:63741] AH00695: Cached response for /testpage isn't fresh. Adding/replacing conditional request headers. 
[Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(414): [client 192.168.1.100:63741] AH00757: Adding CACHE_SAVE filter for /testpage [Sun Nov 04 11:58:42.899890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(448): [client 192.168.1.100:63741] AH00759: Adding CACHE_REMOVE_URL filter for /testpage [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] mod_proxy.c(1068): [client 192.168.1.100:63741] AH01143: Running scheme http handler (attempt 0) [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1976): AH00942: HTTP: has acquired connection for (origin.lo) [Sun Nov 04 11:58:42.899890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2029): [client 192.168.1.100:63741] AH00944: connecting http://origin.lo/testpage to origin.lo:80 [Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2151): [client 192.168.1.100:63741] AH00947: connected /testpage to origin.lo:80 [Sun Nov 04 11:58:42.901890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(2554): AH00962: HTTP: connection complete to 192.168.1.100:80 (origin.lo) [Sun Nov 04 11:58:42.903890 2012] [proxy:debug] [pid 6492:tid 1400] proxy_util.c(1991): AH00943: http: has released connection for (origin.lo) [Sun Nov 04 11:58:42.903890 2012] [headers:debug] [pid 6492:tid 1400] mod_headers.c(800): AH01502: headers: ap_headers_output_filter() [Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1190): [client 192.168.1.100:63741] AH00769: cache: Caching url: /testpage [Sun Nov 04 11:58:42.903890 2012] [cache:debug] [pid 6492:tid 1400] mod_cache.c(1196): [client 192.168.1.100:63741] AH00770: cache: Removing CACHE_REMOVE_URL filter. [Sun Nov 04 11:58:42.904890 2012] [cache_disk:debug] [pid 6492:tid 1400] mod_cache_disk.c(1318): [client 192.168.1.100:63741] AH00737: commit_entity: Headers and body for URL http://proxy.lo/testpage? cached. The first request to the origin server without mod_proxy to http://origin.lo/ GET http://origin.lo/testpage HTTP/1.1 Host: origin.lo Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Accept: application/json Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 The first response from the origin without mod_proxy HTTP/1.1 200 OK Cache-Control: must-revalidate, proxy-revalidate, max-age=30 Content-Type: application/json; charset=utf-8 ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2" Server: Microsoft-IIS/7.5 X-AspNet-Version: 4.0.30319 X-Powered-By: ASP.NET Date: Sun, 04 Nov 2012 10:11:01 GMT Content-Length: 1877 So, I assumed that revalidation must be occur only in 30 seconds after the success response. Is't right? Let's check it:) Within 30 sec, the Google Chrome didn't perform any requests to the origin server to revalidate a request and has return the response from local cache. 
When max-age is expired, the Google Chrome perform a request to revalidate: GET http://origin.lo/testpage HTTP/1.1 Host: origin.lo Connection: keep-alive Cache-Control: max-age=0 User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Accept: application/xml If-None-Match: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2" Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 and response: HTTP/1.1 304 Not Modified Cache-Control: must-revalidate, proxy-revalidate, max-age=30 ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2" Server: Microsoft-IIS/7.5 X-AspNet-Version: 4.0.30319 X-Powered-By: ASP.NET Date: Sun, 04 Nov 2012 10:16:20 GMT As you can see, all works as expected. User agent revalidates request only when max-age is expired. Let's now try perform the folling flow though mod_proxy (see configuration above). The first request: GET http://proxy.lo/testpage HTTP/1.1 Host: proxy.lo Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Accept: application/json Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 and the response was: HTTP/1.1 200 OK Date: Sun, 04 Nov 2012 10:23:36 GMT Server: Apache Cache-Control: private, no-cache, no-store, no-transform Content-Type: application/json; charset=utf-8 ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2" Content-Length: 1932 Vary: Accept,Content-Type,Content-Encoding,Accept-Language X-Cache: MISS from proxy.lo X-Cache-Detail: "cache miss: attempting entity save" from proxy.lo Connection: close Ok, let's see to the disk cache and try to see how request and response was stored. (I cut binary data) http://proxy.lo/testpage? Cache-Control: private, no-cache, no-store, no-transform Content-Type: application/json; charset=utf-8 ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2" Date: Sun, 04 Nov 2012 10:27:15 GMT Content-Length: 1932 Vary: Accept, Content-Type, Content-Encoding, Accept-Language Host: proxy.lo User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Accept: application/json Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 X-Forwarded-Proto: http Cache-Control: max-age=300, must-revalidate X-Forwarded-For: 192.168.1.100 X-Forwarded-Host: proxy.lo X-Forwarded-Server: origin.lo Ok, what we see? 
We see that the first request was performed with max-age=300 & must-revalidate Ok, looks good, as for me, lets perform the next call: GET http://proxy.lo/testpage HTTP/1.1 Host: proxy.lo Connection: keep-alive User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4 Accept: application/json Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 and the second response from mod_proxy: HTTP/1.1 200 OK Date: Sun, 04 Nov 2012 10:31:58 GMT Server: Apache Cache-Control: private, no-cache, no-store, no-transform ETag: "7cf651e2-176f-4ac1-808e-0e0c17cfd0a2" Content-Length: 1932 Vary: Accept,Content-Type,Content-Encoding,Accept-Language X-Cache: REVALIDATE from proxy.lo X-Cache-Detail: "conditional cache hit: entity refreshed" from proxy.lo Connection: close Content-Type: application/json; charset=utf-8 SO, MY QUESTION IS: WHY mod_proxy perform revalidation on each request regardless that max-age is defined? N.B. Apache 2.4.3 Thanks, I would be grateful for any help.
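
    Not an answer, but a hedged way to watch what the proxy decides per request without a browser in the way; the URL is taken from the logs above, and repeating the call a few times inside the 30-second window makes it easy to compare X-Cache / X-Cache-Detail between hits:

        # dump only the interesting response headers, discard the body
        curl -s -D - -o /dev/null http://proxy.lo/testpage | grep -iE 'x-cache|cache-control|etag'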

    Read the article

  • Installing mod_mono on Ubuntu: handler doesn't seem to get registered

    - by Trevor Johns
    I'm trying to install mod_mono on Apache 2 (Prefork MPM). I'm using Ubuntu Karmic, and just want an auto-hosting setup (so that any .aspx files are executed, similar to how PHP is normally setup). I did the following to install Mono: $ apt-get install libapache2-mod-mono mono-apache-server2 mono-devel $ a2dismod mod_mono $ a2enmod mod_mono_auto I've confirmed that mod_mono is getting loaded by Apache. However, any .aspx pages I try to load are returned unprocessed and still have an application/x-asp-net MIME type. It's as if the mod_mono handler never gets registered with Apache. Here's the contents of /etc/mod_mono_auto.load: LoadModule mono_module /usr/lib/apache2/modules/mod_mono.so And here's /etc/mod_mono_auto.conf: MonoAutoApplication enabled AddType application/x-asp-net .aspx AddType application/x-asp-net .asmx AddType application/x-asp-net .ashx AddType application/x-asp-net .asax AddType application/x-asp-net .ascx AddType application/x-asp-net .soap AddType application/x-asp-net .rem AddType application/x-asp-net .axd AddType application/x-asp-net .cs AddType application/x-asp-net .config AddType application/x-asp-net .dll DirectoryIndex index.aspx DirectoryIndex Default.aspx DirectoryIndex default.aspx I've even tried setting the handler explicitly: AddHandler mono .aspx .ascx .asax .ashx .config .cs .asmx .asp Nothing seems to help. Any ideas how to get this working?
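
    In case it helps, a hedged sketch of an explicit (non-auto) configuration that is sometimes easier to debug than MonoAutoApplication; the DocumentRoot and the mod-mono-server2 path are assumptions for a stock Ubuntu install, so adjust them to match your system:

        # /etc/apache2/conf.d/mono.conf -- explicit application mapping instead of autohosting
        MonoServerPath default /usr/bin/mod-mono-server2
        MonoApplications default "/:/var/www"
        AddType application/x-asp-net .aspx .asmx .ashx
        <Location "/">
            MonoSetServerAlias default
            SetHandler mono
        </Location>

    If pages still come back unprocessed after that, the usual suspect is another handler or MIME mapping claiming .aspx first in one of the other enabled conf files.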

    Read the article

  • Mac OS X: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protected flag in the information dialog works just fine. However, I have many more such files, so I want to do it via Terminal. I already tried 'chmod +w', but that didn't work. 'ls -la' showed me that the permissions are already fine ("-rwxrwxrwx 1 az az", where az is my user account). Then I thought this might be stored in some xattr properties, but 'xattr -l' didn't give me any entry. Then I thought this might be some ACL setting (whereby I thought they would be stored as xattrs, but let's try it anyway), and a Google search returned something with 'chmod -a' or 'chmod -i' or so. All these attempts only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-protect flag in Finder solves that.
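
    In case it is useful: the Finder "Locked" checkbox generally maps to a BSD file flag (uchg), not to POSIX permissions or ACLs, which would explain why chmod and xattr show nothing unusual. A hedged sketch, with the volume path as an assumption:

        ls -lO /Volumes/MYFAT32/somefile          # capital O prints the flags column; look for "uchg"
        chflags nouchg /Volumes/MYFAT32/somefile  # clear the lock on one file
        chflags -R nouchg /Volumes/MYFAT32        # or clear it recursively for the whole volume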

    Read the article

  • Problem using a public key when connecting to a SSH server running on Cygwin

    - by binary255
    We have installed Cygwin on a Windows Server 2008 Standard server and it is working pretty well. Unfortunately we still have a big problem: we want to connect through SSH using a public key, but this doesn't work. It always falls back to password login. We have appended our public key to ~/.ssh/authorized_keys on the server, and we have our private and public keys in ~/.ssh/id_dsa and ~/.ssh/id_dsa.pub respectively on the client. When debugging the SSH login session we see that the key is offered, but the server apparently rejects it for some unknown reason. The SSH output when connecting from an Ubuntu 9.10 desktop with debug information enabled: $ ssh -v 192.168.10.11 OpenSSH_5.1p1 Debian-6ubuntu2, OpenSSL 0.9.8g 19 Oct 2007 debug1: Reading configuration data /home/myuseraccount/.ssh/config debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for debug1: Connecting to 192.168.10.11 [192.168.10.11] port 22. debug1: Connection established. debug1: identity file /home/myuseraccount/.ssh/identity type -1 debug1: identity file /home/myuseraccount/.ssh/id_rsa type -1 debug1: identity file /home/myuseraccount/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3 debug1: match: OpenSSH_5.3 pat OpenSSH debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-6ubuntu2 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: server->client aes128-cbc hmac-md5 none debug1: kex: client->server aes128-cbc hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug1: Host '192.168.10.11' is known and matches the RSA host key. 
debug1: Found key in /home/myuseraccount/.ssh/known_hosts:12 debug1: ssh_rsa_verify: signature correct debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: publickey debug1: Offering public key: /home/myuseraccount/.ssh/id_dsa debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Trying private key: /home/myuseraccount/.ssh/identity debug1: Trying private key: /home/myuseraccount/.ssh/id_rsa debug1: Next authentication method: keyboard-interactive debug1: Authentications that can continue: publickey,password,keyboard-interactive debug1: Next authentication method: password [email protected]'s password: The version of Cygwin: $ uname -a CYGWIN_NT-6.0 servername 1.7.1(0.218/5/3) 2009-12-07 11:48 i686 Cygwin The installed packages: $ cygcheck -c Cygwin Package Information Package Version Status _update-info-dir 00871-1 OK alternatives 1.3.30c-10 OK arj 3.10.22-1 OK aspell 0.60.5-1 OK aspell-en 6.0.0-1 OK aspell-sv 0.50.2-2 OK autossh 1.4b-1 OK base-cygwin 2.1-1 OK base-files 3.9-3 OK base-passwd 3.1-1 OK bash 3.2.49-23 OK bash-completion 1.1-2 OK bc 1.06-2 OK bzip2 1.0.5-10 OK cabextract 1.1-1 OK compface 1.5.2-1 OK coreutils 7.0-2 OK cron 4.1-59 OK crypt 1.1-1 OK csih 0.9.1-1 OK curl 7.19.6-1 OK cvs 1.12.13-10 OK cvsutils 0.2.5-1 OK cygrunsrv 1.34-1 OK cygutils 1.4.2-1 OK cygwin 1.7.1-1 OK cygwin-doc 1.5-1 OK cygwin-x-doc 1.1.0-1 OK dash 0.5.5.1-2 OK diffutils 2.8.7-2 OK doxygen 1.6.1-2 OK e2fsprogs 1.35-3 OK editrights 1.01-2 OK emacs 23.1-10 OK emacs-X11 23.1-10 OK file 5.04-1 OK findutils 4.5.5-1 OK flip 1.19-1 OK font-adobe-dpi75 1.0.1-1 OK font-alias 1.0.2-1 OK font-encodings 1.0.3-1 OK font-misc-misc 1.1.0-1 OK fontconfig 2.8.0-1 OK gamin 0.1.10-10 OK gawk 3.1.7-1 OK gettext 0.17-11 OK gnome-icon-theme 2.28.0-1 OK grep 2.5.4-2 OK groff 1.19.2-2 OK gvim 7.2.264-1 OK gzip 1.3.12-2 OK hicolor-icon-theme 0.11-1 OK inetutils 1.5-6 OK ipc-utils 1.0-1 OK keychain 2.6.8-1 OK less 429-1 OK libaspell15 0.60.5-1 OK libatk1.0_0 1.28.0-1 OK libaudio2 1.9.2-1 OK libbz2_1 1.0.5-10 OK libcairo2 1.8.8-1 OK libcurl4 7.19.6-1 OK libdb4.2 4.2.52.5-2 OK libdb4.5 4.5.20.2-2 OK libexpat1 2.0.1-1 OK libfam0 0.1.10-10 OK libfontconfig1 2.8.0-1 OK libfontenc1 1.0.5-1 OK libfreetype6 2.3.12-1 OK libgcc1 4.3.4-3 OK libgdbm4 1.8.3-20 OK libgdk_pixbuf2.0_0 2.18.6-1 OK libgif4 4.1.6-10 OK libGL1 7.6.1-1 OK libglib2.0_0 2.22.4-2 OK libglitz1 0.5.6-10 OK libgmp3 4.3.1-3 OK libgtk2.0_0 2.18.6-1 OK libICE6 1.0.6-1 OK libiconv2 1.13.1-1 OK libidn11 1.16-1 OK libintl3 0.14.5-1 OK libintl8 0.17-11 OK libjasper1 1.900.1-1 OK libjbig2 2.0-11 OK libjpeg62 6b-21 OK libjpeg7 7-10 OK liblzma1 4.999.9beta-10 OK libncurses10 5.7-18 OK libncurses8 5.5-10 OK libncurses9 5.7-16 OK libopenldap2_3_0 2.3.43-1 OK libpango1.0_0 1.26.2-1 OK libpcre0 8.00-1 OK libpixman1_0 0.16.6-1 OK libpng12 1.2.35-10 OK libpopt0 1.6.4-4 OK libpq5 8.2.11-1 OK libreadline6 5.2.14-12 OK libreadline7 6.0.3-2 OK libsasl2 2.1.19-3 OK libSM6 1.1.1-1 OK libssh2_1 1.2.2-1 OK libssp0 4.3.4-3 OK libstdc++6 4.3.4-3 OK libtiff5 3.9.2-1 OK libwrap0 7.6-20 OK libX11_6 1.3.3-1 OK libXau6 1.0.5-1 OK libXaw3d7 1.5D-8 OK libXaw7 1.0.7-1 OK libxcb-render-util0 0.3.6-1 OK libxcb-render0 1.5-1 OK libxcb1 1.5-1 OK libXcomposite1 0.4.1-1 OK libXcursor1 1.1.10-1 OK libXdamage1 1.1.2-1 OK libXdmcp6 
1.0.3-1 OK libXext6 1.1.1-1 OK libXfixes3 4.0.4-1 OK libXft2 2.1.14-1 OK libXi6 1.3-1 OK libXinerama1 1.1-1 OK libxkbfile1 1.0.6-1 OK libxml2 2.7.6-1 OK libXmu6 1.0.5-1 OK libXmuu1 1.0.5-1 OK libXpm4 3.5.8-1 OK libXrandr2 1.3.0-10 OK libXrender1 0.9.5-1 OK libXt6 1.0.7-1 OK links 1.00pre20-1 OK login 1.10-10 OK luit 1.0.5-1 OK lynx 2.8.5-4 OK man 1.6e-1 OK minires 1.02-1 OK mkfontdir 1.0.5-1 OK mkfontscale 1.0.7-1 OK openssh 5.4p1-1 OK openssl 0.9.8m-1 OK patch 2.5.8-9 OK patchutils 0.3.1-1 OK perl 5.10.1-3 OK rebase 3.0.1-1 OK run 1.1.12-11 OK screen 4.0.3-5 OK sed 4.1.5-2 OK shared-mime-info 0.70-1 OK tar 1.22.90-1 OK terminfo 5.7_20091114-13 OK terminfo0 5.5_20061104-11 OK texinfo 4.13-3 OK tidy 041206-1 OK time 1.7-2 OK tzcode 2009k-1 OK unzip 6.0-10 OK util-linux 2.14.1-1 OK vim 7.2.264-2 OK wget 1.11.4-4 OK which 2.20-2 OK wput 0.6.1-2 OK xauth 1.0.4-1 OK xclipboard 1.1.0-1 OK xcursor-themes 1.0.2-1 OK xemacs 21.4.22-1 OK xemacs-emacs-common 21.4.22-1 OK xemacs-sumo 2007-04-27-1 OK xemacs-tags 21.4.22-1 OK xeyes 1.1.0-1 OK xinit 1.2.1-1 OK xinput 1.5.0-1 OK xkbcomp 1.1.1-1 OK xkeyboard-config 1.8-1 OK xkill 1.0.2-1 OK xmodmap 1.0.4-1 OK xorg-docs 1.5-1 OK xorg-server 1.7.6-2 OK xrdb 1.0.6-1 OK xset 1.1.0-1 OK xterm 255-1 OK xz 4.999.9beta-10 OK zip 3.0-11 OK zlib 1.2.3-10 OK zlib-devel 1.2.3-10 OK zlib0 1.2.3-10 OK The ssh deamon configuration file: $ cat /etc/sshd_config # $OpenBSD: sshd_config,v 1.80 2008/07/02 02:24:18 djm Exp $ # This is the sshd server system-wide configuration file. See # sshd_config(5) for more information. # This sshd was compiled with PATH=/bin:/usr/sbin:/sbin:/usr/bin # The strategy used for options in the default sshd_config shipped with # OpenSSH is to specify options with their default value where # possible, but leave them commented. Uncommented options change a # default value. Port 22 #AddressFamily any #ListenAddress 0.0.0.0 #ListenAddress :: # Disable legacy (protocol version 1) support in the server for new # installations. In future the default will change to require explicit # activation of protocol 1 Protocol 2 # HostKey for protocol version 1 #HostKey /etc/ssh_host_key # HostKeys for protocol version 2 #HostKey /etc/ssh_host_rsa_key #HostKey /etc/ssh_host_dsa_key # Lifetime and size of ephemeral version 1 server key #KeyRegenerationInterval 1h #ServerKeyBits 1024 # Logging # obsoletes QuietMode and FascistLogging #SyslogFacility AUTH #LogLevel INFO # Authentication: #LoginGraceTime 2m #PermitRootLogin yes StrictModes no #MaxAuthTries 6 #MaxSessions 10 RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys # For this to work you will also need host keys in /etc/ssh_known_hosts #RhostsRSAAuthentication no # similar for protocol version 2 #HostbasedAuthentication no # Change to yes if you don't trust ~/.ssh/known_hosts for # RhostsRSAAuthentication and HostbasedAuthentication #IgnoreUserKnownHosts no # Don't read the user's ~/.rhosts and ~/.shosts files #IgnoreRhosts yes # To disable tunneled clear text passwords, change to no here! #PasswordAuthentication yes #PermitEmptyPasswords no # Change to no to disable s/key passwords #ChallengeResponseAuthentication yes # Kerberos options #KerberosAuthentication no #KerberosOrLocalPasswd yes #KerberosTicketCleanup yes #KerberosGetAFSToken no # GSSAPI options #GSSAPIAuthentication no #GSSAPICleanupCredentials yes # Set this to 'yes' to enable PAM authentication, account processing, # and session processing. 
If this is enabled, PAM authentication will # be allowed through the ChallengeResponseAuthentication and # PasswordAuthentication. Depending on your PAM configuration, # PAM authentication via ChallengeResponseAuthentication may bypass # the setting of "PermitRootLogin without-password". # If you just want the PAM account and session checks to run without # PAM authentication, then enable this but set PasswordAuthentication # and ChallengeResponseAuthentication to 'no'. #UsePAM no AllowAgentForwarding yes AllowTcpForwarding yes GatewayPorts yes X11Forwarding yes X11DisplayOffset 10 X11UseLocalhost no #PrintMotd yes #PrintLastLog yes TCPKeepAlive yes #UseLogin no UsePrivilegeSeparation yes #PermitUserEnvironment no #Compression delayed #ClientAliveInterval 0 #ClientAliveCountMax 3 #UseDNS yes #PidFile /var/run/sshd.pid #MaxStartups 10 #PermitTunnel no #ChrootDirectory none # no default banner path #Banner none # override default of no subsystems Subsystem sftp /usr/sbin/sftp-server # Example of overriding settings on a per-user basis #Match User anoncvs #X11Forwarding yes #AllowTcpForwarding yes #ForceCommand cvs server I hope this information is enough to solve the problem. In case any more is needed please comment and I'll add it. Thank you for reading!
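
    A hedged set of checks that often pins down why the key is refused on the Cygwin side; the service name and port are assumptions, and running a one-shot sshd in debug mode usually prints the exact reason it skips the authorized_keys entry:

        # on the Windows server, from a Cygwin shell with admin rights
        cygrunsrv -E sshd                    # stop the normal service (name may be "sshd" or similar)
        /usr/sbin/sshd -d -p 2222            # one-shot debug instance on an alternate port
        # from the Ubuntu client:
        ssh -v -p 2222 192.168.10.11
        # things the debug output commonly complains about:
        chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

    StrictModes is already off in the sshd_config above, so permissions may not be the culprit, but the debug output should say either way.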

    Read the article

  • Windows Server 2008 R2 install reboots unexpectedly during "Completing installation" phase

    - by knda
    I am attempting to install Windows Server 2008 R2 onto a Cisco UCS C201 M2 rack mounted server but am having major difficulties and wondering if anyone has some insight or items they could recommend for me to look at to get this one resolved. Installation is being attempted via the Cisco remote console (using CIMC's Virtual dvd-rom).. following the first phase of Setup where the installation files are copied to the target hard drive, then a reboot occurs to load Setup from the HDD, mid-way in the "Completing Installation" phase the system then reboots unexpectedly. System configuration Cisco UCS C201 M2 (2RU rack mounted server) 16GB RAM, 2x 73GB 15K SAS, 4x 300GB 10k SAS Add-on cards - Intel quad-port GigE card (no fibre channel cards) Storage - LSI MegaRAID SAS 9261-8i. onboard SATA is disabled (no SATA drives connected) KVM - Belkin No physical DVD-ROM.. :( I have... Run memtest86+, no RAM faults Disabled/enabled SATA support (BIOS) Attempted install from USB DVD-ROM, no effect Attempted unattended install scripted via Cisco Configuration Manager DVD provided Removed Belkin KVM in case that was causing drama Discovered that the Cisco website is "awesome" for searching for PDFs/Drivers cough, reverted back to Google Downloaded latest LSI drivers from LSI's site and used during Server 2008 install checked Windows ISO against checksum's from MS site checked Windows ISO by using it for an install in a VM Running out of ways to troubleshoot this as I am not sure how to enable any sort of 'verbose' mode during the setup process. Next step I have planned is to remove the Intel NIC and try the installation again.. Edit: Problem was the "Cisco INTEL QUAD PT GBE" (1000/PT) .. will have to see if this card is faulty or if it's just drivers.. thanks for the help.

    Read the article

  • How to compress .pdfs in word 2007?

    - by chobo2
    Hi, I am trying to send my cover letter and resume, but apparently it is too big to send through craigslist (my computer says the total size is 500 KB) as it has a 600 KB limit (so small; it should be at least a meg): "Hi there. You recently tried to email Some job Email, an anonymous craigslist address. However, your message was too big to be sent through our system. Craigslist has a 600KB limit on the messages we'll send. Please reduce the size of your mail and try again. Thanks for using craigslist." The problem is that when I convert my Word 2007 (.docx) files to PDF they become huge: they go from 32 KB to 320 KB. So is there a way I can either get around the craigslist limit or compress my PDFs a bit to make it happy? I don't want to send zips and such, since the person who gets it might not even know what to do with them. I would rather not send .docx, since I'm not sure they will have Office 2007 or the compatibility pack installed, and I would rather just send it as PDF (as some places require it to be in PDF anyway). Thanks
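
    Two things that might help, hedged since I can't see the documents: Word 2007's Save As PDF dialog has a "Minimum size (publishing online)" optimization that often shrinks the output a lot, and failing that, Ghostscript (free) can usually squeeze a resume-sized PDF well under the limit. A sketch, assuming Ghostscript for Windows is installed and the file is resume.pdf:

        gswin32c -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/ebook ^
          -dNOPAUSE -dBATCH -dQUIET -sOutputFile=resume-small.pdf resume.pdf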

    Read the article

  • Rebinding Keys on a Dell Keyboard

    - by Maarx
    I have a Dell multimedia keyboard. It has many non-standard keys, like the small circular ones across the top and the "Multimedia" keys above INSERT/HOME/PAGE_UP. They can be rebound through simple registry entries; some samples are included below: [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey] [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\15] "ShellExecute"="C:\\Program Files (x86)\\Mozilla Firefox\\firefox.exe http://mail.google.com/mail/#inbox" [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\16] "Association"=".cda" [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\17] "ShellExecute"="C:\\Windows\\System32\\SnippingTool.exe" [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\18] "ShellExecute"="calc.exe" [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AppKey\7] "Association"="http" I've rebound the "MAIL" key so that, instead of launching Outlook, it launches Firefox pointed at my Gmail account. I've rebound the button that would normally open "MY COMPUTER" to instead launch the Windows 7 "Snipping Tool", something I find very useful. Now I'm looking to do some other things that I don't already know how to do. Note that answering this question doesn't necessarily require any knowledge about the keyboard or rebinding the keys: I can add, for any given key, a "ShellExecute" entry, and it will simply execute the given command as if it were typed at a Command Prompt. (I'm aware I dumbed that down rather significantly, but bear with me; I'm not really a Windows guy myself.) I use the volume knob for its intended purpose, to change the volume. However, I would like a different key to "reset" the Windows volume level back to exactly 50%, or, as Windows refers to it, "50" on its 0-100 scale. I'm looking for the "program" (what I would type at a command prompt? these are still just Sys32 programs in the PATH, aren't they?) that, I imagine, would take arguments to change sound/volume settings under Windows 7. Perhaps, for clarification, something that might take the form "C: SetVolume -slevel 50" or something.
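
    Windows 7 doesn't ship a console program for setting the master volume, so the usual trick is a tiny third-party helper; a hedged sketch using NirSoft's nircmd (the tool choice and its install path are assumptions), which expresses volume on a 0-65535 scale, so 50% is roughly 32768:

        REM contents of a SetVolume50.cmd that the key's ShellExecute entry could point at
        "C:\Tools\nircmd.exe" setsysvolume 32768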

    Read the article

  • Synchronization of volume snapshots when doing whole system backups

    - by intuited
    Is there a way to guarantee consistency across volumes when doing backups from LVM snapshots? Consider this scenario: Some system upgrade is in progress. It will write some files to the /usr volume, and once completed, will record success in the /var volume. As the upgrade is just about complete, I run a backup script that creates snapshots of the /usr and /var volumes, along with the rest of the system's volumes, and proceeds to create backups from those snapshots. Just before the upgrade's last write/flush on the /usr volume completes, the backup script takes its snapshot of /usr. That write completes, and the upgrade operation's success is quickly recorded in the nebulous depths of /var. The backup script takes a snapshot of /var. The backup script creates backups from the snapshots it has, er, snapshotted. So the result of all of this tomfoolery is that the resulting /usr backup contains a file which is missing a few bits, and the /var backup contains metadata indicating that that file is complete and approved for use. Without delving into the details of which operating systems' system upgrade systems would be unfazed by such trifles, is there a way to avoid such problems? At the least this seems like it could cause some application to fail unexpectedly after restoration of such a backup.
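
    One hedged approach is to quiesce both filesystems for the instant the snapshots are taken, so nothing can land on /usr or /var between the two lvcreate calls; a sketch assuming the logical volumes are /dev/vg0/usr and /dev/vg0/var (names and sizes are assumptions):

        #!/bin/sh
        # freeze both filesystems, snapshot both, then thaw; the frozen window is typically well under a second
        fsfreeze -f /usr && fsfreeze -f /var
        lvcreate -s -L 2G -n usr_snap /dev/vg0/usr
        lvcreate -s -L 2G -n var_snap /dev/vg0/var
        fsfreeze -u /var; fsfreeze -u /usr

    This guarantees the two snapshots describe the same instant, though it cannot make an application-level sequence (write the file, then record success) atomic if the freeze lands between those two steps.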

    Read the article

  • 1 document pending in printer queue in System Tray that won't go away

    - by White Phoenix
    Edit: I didn't want to do it, but I restarted my computer - cleared the problem right away. Though if I were running a Windows 2003/2008 server, I would hate to have to restart the domain controller just to get rid of this irritating problem. If I run into this problem again, I'm going to try that remove printer/reinstall printer thing. Thanks for your help @Psychogeek. Points for your attempt. Running Windows 7, 32-bit Professional. My printer is an HP OfficeJet Wireless 8500. It's connected to my network wirelessly through TCP/IP as a standalone device. I was having some print problems awhile back and had to do some print spooler stuff as part of my troubleshooting (stopping the Print Spooler service, clearing the print spooler files from C:\Windows\System32\spool\PRINTERS and then restarting the service). I've finally narrowed it down to it being application specific, so that's that. However, as a leftover from all that troubleshooting, my printer icon is stuck in the tray - when I mouseover the icon, Windows says that there is 1 document(s) pending for my username. However, when I open up that printer's queue, there's nothing in there. I restarted the Printer Spooler service and also checked C:\Windows\System32\spool\PRINTERS if there's anything in there - nothing. I did a quick Google search and an answer from one of those "reps" at the Microsoft Socialnet site says for me to uninstall and reinstall the printer. The funny thing is, when I send print jobs, they print just fine - that 1 mystery document stuck in queue isn't stopping anything from happening. Short of having to do that, are there any other quick troubleshooting steps I may be missing?
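
    For reference, a hedged command-line version of the spooler reset, in case a phantom job survives the Services-GUI route (run from an elevated prompt; the paths are the defaults):

        net stop spooler
        del /F /Q %SystemRoot%\System32\spool\PRINTERS\*.*
        net start spooler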

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    I have strange things happening on the live server that work normally on my local server. My local server is a Mac, and my live server is Linux. Consider what happens when I try to access some files: http://redddor.babonmultimedia.com/assets/images/map-1.jpg works correctly, but http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php returns a 404. I'm pretty sure the file is there and there is no typo. How come it gives me a 404? There is only one .htaccess in the server root, and its configuration is like this. # For full documentation and other suggested options, please see # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions # including for unexpected logouts in multi-server/cloud environments # and especially for the first three commented out rules #php_flag register_globals Off #AddDefaultCharset utf-8 #php_value date.timezone Europe/Moscow Options +FollowSymlinks RewriteEngine On RewriteBase / <IfModule mod_security.c> SecFilterEngine Off </IfModule> # Fix Apache internal dummy connections from breaking [(site_url)] cache RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC] RewriteRule .* - [F,L] # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin #RewriteCond %{HTTP_HOST} . #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC] #RewriteRule (.*) http://www.example.com/$1 [R=301,L] # Exclude /assets and /manager directories and images from rewrite rules RewriteRule ^(manager|assets)/*$ - [L] RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L] # For Friendly URLs RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.*)$ index.php?q=$1 [L,QSA] # Reduce server overhead by enabling output compression if supported. #php_flag zlib.output_compression On #php_value zlib.output_compression_level 5
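
    A couple of hedged checks worth running in a shell on the live server, since the most common Mac-to-Linux difference here is filename case (the Mac filesystem is case-insensitive, the Linux one is not) or the file simply not being deployed or readable; the document-root path below is an assumption:

        ls -l /var/www/html/assets/modules/evogallery/check.php      # does it exist and is it readable?
        ls /var/www/html/assets/modules/evogallery/ | grep -i check  # compare the exact spelling and case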

    Read the article

  • hadoop: port appears open locally but not remotelly

    - by miguel
    I am new to linux and hadoop and I am having the same issue as in this question. I think I understand what is causing it but I don't know how to solve it (Don't know what they mean by "Edit the Hadoop server's configuration file so that it includes its NIC's address."). The other post that they link says that the configuration files should refer to the machine's externally accessible host name. I think I got this right as every hadoop configuration file refers to "master" and the etc/hosts file lists the master by its private IP address. How can I solve this? Edit: I have 5 nodes: master, slavec, slaved, slavee and slavef all running debian. This is the hosts file in master: 127.0.0.1 master 10.0.1.201 slavec 10.0.1.202 slaved 10.0.1.203 slavee 10.0.1.204 slavef this is the hosts file in slavec (it looks similar in the other slaves): 10.0.1.200 master 127.0.0.1 slavec 10.0.1.202 slaved 10.0.1.203 slavee 10.0.1.204 slavef the masters file in master: master the slaves file in master: master slavec slaved slavee slavef the masters and slaves file in slavex has only one line: slavex
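
    Based on the hosts files shown, the master resolves its own name to 127.0.0.1, so the Hadoop daemons bind to the loopback interface: reachable locally, invisible to the slaves. A hedged sketch of the usual fix for /etc/hosts on master (keep localhost on its own line), after which the daemons need a restart:

        127.0.0.1   localhost
        10.0.1.200  master
        10.0.1.201  slavec
        10.0.1.202  slaved
        10.0.1.203  slavee
        10.0.1.204  slavef

    Afterwards something like netstat -tlnp | grep 9000 (9000 being the usual fs.default.name port, an assumption) should show the port bound on 10.0.1.200 or 0.0.0.0 rather than 127.0.0.1.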

    Read the article

  • IIS FTP error: 426 Connection closed; transfer aborted.

    - by Jiaoziren
    Hi, I have IIS FTP set up on Windows 2003 SP2 (S1). Every day in the early morning, a script on another server (S2) runs and initiates an FTP transfer to pull log files from S1 to S2. The FTP client we're using is the built-in FTP.exe in Windows 2000 on S2. Recently we replaced S1 with a new server, but we kept the IP address. There are multiple IP addresses on the new S1. Ever since the new S1 was put in place, '426 Connection closed; transfer aborted.' errors have been occurring randomly. The log indicates that the transfer starts OK but the file cannot be transferred completely, as per the log below: mget access*.log 200 Type set to A. 200 PORT command successful. 150 Opening ASCII mode data connection for access02232010.log(205777167 bytes). 426 Connection closed; transfer aborted. ftp: 20454832 bytes received in 283.95Seconds 72.04Kbytes/sec. The firewall monitor suggested that the connection was set up in passive mode, but I've been told that MS FTP.exe doesn't support passive mode, though I can see the 'entering passive mode' response from the server when typing 'quote pasv'. My network admin has told me to try the transfer in active mode, but I don't know how to enable active mode on the client side. It's getting really frustrating. I wish someone here with the right knowledge/experience could shed some light on this. Cheers.

    Read the article

  • Setting up a localhost mail server on Mac OSX

    - by Thom
    I asked this over on Stack Overflow and they pointed me here. I would love to be able to test PHP web apps that require emailing registration info etc. on my Mac. I downloaded a version of CommuniGate Pro. I need to mail either to an account inside or outside (whichever is best) of the localhost. Again, this would be used for testing purposes to verify and debug my code prior to uploading to a hosting service. Any ideas, help and/or examples would be very much appreciated. If it would be easier I could go over to Windows XP; that would just mean setting up WAMP and transferring my files over from the Mac side via Dropbox. I got the local mail server to work, so I can send emails between accounts. However, I cannot seem to get the PHP code to work. I know that I am missing something. I see that this has been asked before. I want to add that I am using XAMPP on Mac OS 10.6.8. I tried changing the php.ini SMTP setting to macintosh-3.local. <?php function email($to, $subject, $body, $headers) { $headers = 'MIME-Version: 1.0' . "\r\n"; $headers .= 'From: <[email protected]>' . "\r\n"; mail($to, $subject, $body, $headers); } ?>
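
    One hedged detail that may be the missing piece: PHP's SMTP and smtp_port settings only affect mail() on Windows builds; on OS X/XAMPP, mail() shells out to whatever sendmail_path points at. A sketch of the php.ini change, assuming a sendmail-compatible wrapper exists at the usual path (Postfix ships one; CommuniGate may provide its own):

        ; in the php.ini that XAMPP actually loads (confirm with phpinfo())
        sendmail_path = /usr/sbin/sendmail -t -i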

    Read the article

  • Setting up a localhost mail server on Mac OSX

    - by Thom
    I asked this over on stackoverflow. I would love to be able to test php webapps that require emailing registration info etc. on my mac. I downloaded a version of CommuniGate Pro. I need to mail either to an account inside or outside (whichever is best) of the localhost. Again this would be used for testing purposes to verify and debug my code prior to uploading to a hosting service. Any ideas, help and/or examples would be very much appreciated. If it would be easier I could go over to Windows XP. That would just mean setting up wamp and transfering my files over from the mac side via dropbox. I got the local mailserver to work so I can send emails between accounts. However, I cannot seem to get the php code to work. I know that I am missing something. <?php function email($to, $subject, $body, $headers) { $headers = 'MIME-Version: 1.0' . "\r\n"; $headers .= 'From: <[email protected]>' . "\r\n"; mail($to, $subject, $body, $headers); } ?>

    Read the article

  • Installing Oracle 11gR2 on RHEL 6.2

    - by Chris
    Hello all I'm having some difficulty installing Oracle 11gR2 on RHEL 6.2 I have compiled a giant list of every single step I have taken so far I installed RHEL 6.2 on VMWARE it did it's easy install automatically I Selected 4gb of memory Selected max size of 80Gb Selected 2 processors Sorry for the bad styling copy paste isn't working correctly The version of oracle i downloaded is Linux x86-64 11.2.0.1 I am installing this on a local machine NOT a remote machine I followed the following documentation http://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm I bolded the steps which I was least sure about from my research Easy installed with RHEL 6.2 for VMWARE Registered with red hat so I can get updates Reinstalled vmware-tools by pressing enter at every choice Sudo yum update at the end something about GPG key selected y then y Checked Memory Requirements grep MemTotal /proc/meminfo MemTotal: 3921368 kb uname -m x86_64 grep SwapTotal /proc/meminfo SwapTotal: 6160376 kb free total used free shared buffers cached Mem: 3921368 2032012 1889356 0 76216 1533268 -/+ buffers/cache: 422528 3498840 Swap: 6160376 0 6160376 df -h /dev/shm Filesystem Size Used Avail Use% Mounted on tmpfs 1.9G 276K 1.9G 1% /dev/shm df -h /tmp Filesystem Size Used Avail Use% Mounted on /dev/sda2 73G 2.7G 67G 4% / df -h Filesystem Size Used Avail Use% Mounted on /dev/sda2 73G 2.7G 67G 4% / tmpfs 1.9G 276K 1.9G 1% /dev/shm /dev/sda1 291M 58M 219M 21% /boot All looked fine to me except maybe for swap? Software Requirements cat /proc/version Linux version 2.6.32-220.el6.x86_64 ([email protected]) (gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Wed Nov 9 08:03:13 EST 2011 uname -r 2.6.32-220.el6.x86_64 (same as above but whatever) According to the tutorial should be On Red Hat Enterprise Linux 6 2.6.32-71.el6.x86_64 or later These are the versions of software I have installed binutils-2.20.51.0.2-5.28.el6.x86_64 compat-libcap1-1.10-1.x86_64 compat-libstdc++-33-3.2.3-69.el6.x86_64 compat-libstdc++-33.i686 0:3.2.3-69.el6 gcc-4.4.6-3.el6.x86_64 gcc-c++.x86_64 0:4.4.6-3.el6 glibc-2.12-1.47.el6_2.12.x86_64 glibc-2.12-1.47.el6_2.12.i686 glibc-devel-2.12-1.47.el6_2.12.x86_64 glibc-devel.i686 0:2.12-1.47.el6_2.12 ksh.x86_64 0:20100621-12.el6_2.1 libgcc-4.4.6-3.el6.x86_64 libgcc-4.4.6-3.el6.i686 libstdc++-4.4.6-3.el6.x86_64 libstdc++.i686 0:4.4.6-3.el6 libstdc++-devel.i686 0:4.4.6-3.el6 libstdc++-devel-4.4.6-3.el6.x86_64 libaio-0.3.107-10.el6.x86_64 libaio-0.3.107-10.el6.i686 libaio-devel-0.3.107-10.el6.x86_64 libaio-devel-0.3.107-10.el6.i686 make-3.81-19.el6.x86_64 sysstat-9.0.4-18.el6.x86_64 unixODBC-2.2.14-11.el6.x86_64 unixODBC-devel-2.2.14-11.el6.x86_64 unixODBC-devel-2.2.14-11.el6.i686 unixODBC-2.2.14-11.el6.i686 8. 
Probably screwed up here or step 9 /usr/sbin/groupadd oinstall /usr/sbin/groupadd dba(not sure why this isn't in the tutorial) /usr/sbin/useradd -g oinstall -G dba oracle passwd oracle /sbin/sysctl -a | grep sem Xkernel.sem = 250 32000 32 128 /sbin/sysctl -a | grep shm kernel.shmmax = 68719476736 kernel.shmall = 4294967296 kernel.shmmni = 4096 vm.hugetlb_shm_group = 0 /sbin/sysctl -a | grep file-max Xfs.file-max = 384629 /sbin/sysctl -a | grep ip_local_port_range Xnet.ipv4.ip_local_port_range = 32768 61000 /sbin/sysctl -a | grep rmem_default Xnet.core.rmem_default = 124928 /sbin/sysctl -a | grep rmem_max Xnet.core.rmem_max = 131071 /sbin/sysctl -a | grep wmem_max Xnet.core.wmem_max = 131071 /sbin/sysctl -a | grep wmem_default Xnet.core.wmem_default = 124928 Here is my sysctl.conf file I only added the items that were bigger: Kernel sysctl configuration file for Red Hat Linux # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and sysctl.conf(5) for more details. Controls IP packet forwarding net.ipv4.ip_forward = 0 Controls source route verification net.ipv4.conf.default.rp_filter = 1 Do not accept source routing net.ipv4.conf.default.accept_source_route = 0 Controls the System Request debugging functionality of the kernel kernel.sysrq = 0 Controls whether core dumps will append the PID to the core filename. Useful for debugging multi-threaded applications. kernel.core_uses_pid = 1 Controls the use of TCP syncookies net.ipv4.tcp_syncookies = 1 Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 Controls the maximum size of a message, in bytes kernel.msgmnb = 65536 Controls the default maxmimum size of a mesage queue kernel.msgmax = 65536 Controls the maximum shared segment size, in bytes kernel.shmmax = 68719476736 Controls the maximum number of shared memory segments, in pages kernel.shmall = 4294967296 fs.aio-max-nr = 1048576 fs.file-max = 6815744 kernel.sem = 250 32000 100 128 net.ipv4.ip_local_port_range = 9000 65500 net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 1048576 /sbin/sysctl -p net.ipv4.ip_forward = 0 net.ipv4.conf.default.rp_filter = 1 net.ipv4.conf.default.accept_source_route = 0 kernel.sysrq = 0 kernel.core_uses_pid = 1 net.ipv4.tcp_syncookies = 1 error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key error: "net.bridge.bridge-nf-call-iptables" is an unknown key error: "net.bridge.bridge-nf-call-arptables" is an unknown key kernel.msgmnb = 65536 kernel.msgmax = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 fs.aio-max-nr = 1048576 fs.file-max = 6815744 kernel.sem = 250 32000 100 128 net.ipv4.ip_local_port_range = 9000 65500 net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 1048576 su - oracle ulimit -Sn 1024 ulimit -Hn 1024 ulimit -Su 1024 ulimit -Hu 30482 ulimit -Su 1024 ulimit -Ss 10240 ulimit -Hs unlimited su - nano /etc/security/limits.conf *added to the end of the file * oracle soft nproc 2047 oracle hard nproc 16384 oracle soft nofile 1024 oracle hard nofile 65536 oracle soft stack 10240 exit exit su - mkdir -p /app/ chown -R oracle:oinstall /app/ chmod -R 775 /app/ 9. 
THIS IS PROBABLY WHERE I MESSED UP I then exited out of the root account so now I'm back in my account chris then I su - oracle echo $SHELL /bin/bash umask 0022 (so it should be set already to what is neccesary) Also from what I have read I do not need to set the DISPLAY variable because I'm installing this on the localhost I then opened the .bash_profile of the oracle and changed it to the following .bash_profile Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi User specific environment and startup programs PATH=$PATH:$HOME/bin; export PATH ORACLE_BASE=/app/oracle ORACLE_SID=orcl export ORACLE_BASE ORACLE_SID I then shutdown the virtual machine shared my desktop folder from my windows 7 then turned back on the virtual machine logged in as chris opened up a terminal then: su - for some reason the shared folder didn't appear so I reinstalled vmware tools again and restarted then same as before su - cp -R linux_oracle/database /db; chown -R oracle:oinstall /db; chmod -R 775 /db; ll /db drwxrwxr-x. 8 oracle oinstall 4096 Jun 5 06:20 database exit su - oracle cd /db/database ./runInstaller AND FINALLY THE INFAMOUS JAVA:132 ERROR MESSAGE Starting Oracle Universal Installer... Checking Temp space: must be greater than 80 MB. Actual 65646 MB Passed Checking swap space: must be greater than 150 MB. Actual 6015 MB Passed Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-06-05_06-47-12AM. Please wait ...[oracle@localhost database]$ Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/OraInstall2012-06-05_06-47-12AM/jdk/jre/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1647) at java.lang.Runtime.load0(Runtime.java:769) at java.lang.System.load(System.java:968) at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1668) at java.lang.Runtime.loadLibrary0(Runtime.java:822) at java.lang.System.loadLibrary(System.java:993) at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50) at java.security.AccessController.doPrivileged(Native Method) at java.awt.Toolkit.loadLibraries(Toolkit.java:1509) at java.awt.Toolkit.(Toolkit.java:1530) at com.jgoodies.looks.LookUtils.isLowResolution(Unknown Source) at com.jgoodies.looks.LookUtils.(Unknown Source) at com.jgoodies.looks.plastic.PlasticLookAndFeel.(PlasticLookAndFeel.java:122) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:242) at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1783) at javax.swing.UIManager.setLookAndFeel(UIManager.java:480) at oracle.install.commons.util.Application.startup(Application.java:758) at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:164) at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181) at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:265) at oracle.install.ivw.db.driver.DBInstaller.startup(DBInstaller.java:114) at oracle.install.ivw.db.driver.DBInstaller.main(DBInstaller.java:132)
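
    The UnsatisfiedLinkError at the end is the installer's 32-bit JDK failing to find the 32-bit X11 client libraries. A hedged fix that usually clears it on RHEL 6 is installing the i686 variants (package names as in the standard RHEL 6 repositories) and re-running the installer as the oracle user:

        yum install libXext.i686 libXtst.i686 libXi.i686 libXrender.i686 libXau.i686 libxcb.i686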

    Read the article

  • Checksum for Protecting Read-Only Documents

    - by Kim
    My father owns a small business and has to hand over several years' worth of financial documents to his insurance company's auditor. He's asked me to go through and make sure everything is "read-only" so the data (the files) absolutely, positively cannot be modified or manipulated (he's a bit paranoid). We're talking about 20,000 documents (emails, spreadsheets, etc.). My first inclination was to place everything inside one root folder ("mydadsdocs/") and then write a script that recursively traversed its directory subtree and set the file permissions to read-only. But then I got to thinking: that's a lot of work for me to do to satisfy an old man who is just being paranoid, and after all, if someone really wanted to modify a read-only file, it would be pretty easy to change the file permissions anyway. So: is there a checksum I could run on the root folder, something very quick and easy, that would basically "stamp" the data in that folder so that if someone did change it, my father would have some way of knowing/proving it? If so, how? If not, any other recommendations that are quick, cheap (free) and effective?
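
    A checksum manifest does exactly this and is far less work than chasing permissions; a hedged sketch using the GNU coreutils tools (present on Linux, and on Windows via Cygwin or Git Bash), assuming the tree is mydadsdocs/:

        # build the manifest once, and store a copy somewhere the auditor can't touch
        find mydadsdocs/ -type f -print0 | xargs -0 sha256sum > manifest.sha256
        # later: verify; any altered file is listed as FAILED
        sha256sum -c manifest.sha256

    To show the manifest itself wasn't regenerated later, emailing its own sha256sum to a third party at hand-over time gives a cheap timestamp.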

    Read the article

  • Xvnc4 started from xinetd only displays empty gray X screen

    - by Scott Thomason
    I'm attempting to setup an Ubuntu 10.10 box so that anyone can connect to port 5900 and be greeted by the gdm login manager. To do so, I added a vnc entry in /etc/services and I am starting Xvnc4 using this xinetd config file: service vnc { protocol = tcp socket_type = stream wait = no user = nobody server = /usr/bin/Xvnc server_args = -geometry 1000x700 -depth 24 -broadcast -inetd -once -securitytypes None } This kind of works...I can start multiple sessions all to port 5900, and I get an X screen. The problem is that I only get an empty, gray X screen with no applications started. I know when you run vncserver from the command line it will look to your ~/.vnc/ directory for your passwd and xstartup files, and I think what I want to do is put "gnome-session" into the xstart file. However, which xstartup file? The running user is "nobody" who obviously doesn't have a ~/.vnc/ directory. I tried a /root/.vnc/xstartup file and a ~scott/.vnc/xstartup file and it doesn't look like they were even read. I changed the xinetd vnc service so that it would "strace" Xvnc4. I looked thru all the "open" lines and didn't get a clue as to what file it was trying to read for xstart. Can anyone help? I just want a terminal server where the user is presented with a gdm login screen.
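
    A hedged note: an inetd-spawned Xvnc never reads anyone's ~/.vnc/xstartup (that file is only used by the vncserver wrapper script), so the usual way to get the gdm greeter is to have the new X display ask the display manager for a login via XDMCP. The -broadcast already in the server_args is an XDMCP broadcast query; a sketch of the directed variant, with the gdm config path an assumption for this release:

        # /etc/xinetd.d/vnc -- query the local display manager for a login screen
        server_args = -inetd -once -query localhost -geometry 1000x700 -depth 24 -securitytypes None

        # /etc/gdm/custom.conf -- let gdm answer XDMCP queries, then restart gdm
        [xdmcp]
        Enable=true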

    Read the article

  • Trying to build OpenSimulator.. nant fails

    - by Gary
    Output of nant is: Buildfile: file:///root/opensim-0.6.8-release/OpenSim.build Target framework: Mono 2.0 Profile Target(s) specified: build [echo] Using 'mono-2.0' Framework init: Debug: [echo] Platform unix build: [nant] /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build build Buildfile: file:///root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build Target framework: Mono 2.0 Profile Target(s) specified: build build: [echo] Build Directory is /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/bin/Debug [csc] Compiling 29 files to '/root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/bin/Debug/OpenSim.Framework.Servers.HttpServer.dll'. [csc] /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/AsynchronousRestObjectRequester.cs(103,41): error CS0246: The type or namespace name `TResponse' could not be found. Are you missing a using directive or an assembly reference? [csc] Compilation failed: 1 error(s), 0 warnings BUILD FAILED - 0 non-fatal error(s), 1 warning(s) /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build(14,6): External Program Failed: /usr/lib/pkgconfig/../../lib/mono/2.0/gmcs.exe (return code was 1) Total time: 1.2 seconds. BUILD FAILED Nested build failed. Refer to build log for exact reason. Total time: 1.3 seconds. OS is Fedora 7. Any ideas appreciated. :)

    Read the article

  • Problems installing Adobe Premiere Elements 8 Content features

    - by Walt Maken
    I recently purchased Adobe Photoshop & Premiere Elements 8. Photoshop Elements 8 installed fine, along with the additional material available via internet download. Premiere Elements 8 installed ok from the DVD. During the Premiere Elements 8 startup process the following message appears: A reduced set of content (Instant Movie Themes, Title and Menu Templates,etc) has been installed. To install the full content set, please insert your Content DVD and run Setup.exe. If you do not have a Content DVD please visit http://www.adobe.com/go/pre_additional_downloads to download the content installer. Since I didn't have the Content DVD, I did the download, which took nearly 12 hours. The extract appeared to complete at 100%, but then immediately gave the error message "A problem occurred while extracting some files. Check available space on your computer and the write privileges on the destination folder." Why would it show this error message if it had completed the extract process 100%? What step(s) do I take now to have Content installed? Do I need to go thru the 12 hour download again or, hopefully, is there something I can do that will make it unnecessary to download again?

    Read the article

  • Upgrade Debian to unstable on VirtualBox: udev problem

    - by Ken
    I'm running Debian stable on VirtualBox on Windows Vista 64-bit Ultimate. It's been running great, but I needed some newer packages, so I put sid in my sources.list to upgrade to unstable (as I've done a dozen times on various Linux boxes over the years). When I upgraded, something went screwy and it asked me to run apt-get -f install to fix them, which gave this: (Reading database ... 77846 files and directories currently installed.) Preparing to replace udev 0.125-7+lenny3 (using .../archives/udev_151-3_amd64.deb) ... Since release 150, udev requires that support for the CONFIG_SYSFS_DEPRECATED feature is disabled in the running kernel. Please upgrade your kernel before or while upgrading udev. AT YOUR OWN RISK, you can force the installation of this version of udev WHICH DOES NOT WORK WITH YOUR RUNNING KERNEL AND WILL BREAK YOUR SYSTEM AT THE NEXT REBOOT by creating the /etc/udev/kernel-upgrade file. There is always a safer way to upgrade, do not try this unless you understand what you are doing! dpkg: error processing /var/cache/apt/archives/udev_151-3_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty). insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty). insserv: warning: current start runlevel(s) (2 3 4 5) of script `vboxadd-x11' overwrites defaults (empty). insserv: warning: current stop runlevel(s) (0 1 6) of script `vboxadd-x11' overwrites defaults (empty). Errors were encountered while processing: /var/cache/apt/archives/udev_151-3_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) I have the VirtualBox extensions installed, and it looks like the udev install doesn't know what to make of them. But I don't know exactly where/how they're installed (I just ran the VBoxLinuxAdditions-amd64.run script, basically), so I don't know how to disable them. Any ideas? Thanks!
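
    The message is literal: sid's udev refuses to configure under the stock lenny kernel, so the least risky order is to upgrade the kernel first, reboot into it, and only then let the rest of the upgrade continue. A hedged sketch (the metapackage name is an assumption for amd64 sid of that era); the forced route via /etc/udev/kernel-upgrade exists but, as the message says, risks an unbootable guest:

        apt-get install linux-image-2.6-amd64    # pull the current sid kernel
        reboot
        apt-get -f install                       # udev should now configure cleanly
        apt-get dist-upgrade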

    Read the article

  • samba share not on network after upgrading to Ubuntu 12.04LTS. [migrated]

    - by Sylvain Huard
    I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This is an upgrade, not a format and reinstall; all the accounts and config were preserved. The basic setup is a local network with 2 Ubuntu machines (let's say A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by their names, not only by IP address. The local network has a D-Link router where everybody is connected with RJ-45 wired Ethernet (not wi-fi). Since the A-Ubuntu upgrade, we can't see this machine's name on the network and its name is not in the machine list in the D-Link router anymore; we can see its IP address only. I can't access A-Ubuntu from the other two by its name, but I can ping it by its address (192.168.0.109). From A-Ubuntu, I can connect and see the shared samba folders on B-Ubuntu and C-MAC, but from B-Ubuntu and C-MAC I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me that Samba should be fine and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link does not have it in its table and nobody else finds it. After a lot of googling, I see that this is the job of avahi and mdns. Those packages are running; I checked multiple config files for samba, avahi and mdns to see if they match the examples on the web and what I find on the working B-Ubuntu machine, and they are the same. I restarted the samba and avahi services multiple times and removed the firewall to make sure it does not block the hostname broadcast. I rebooted multiple times to make sure the changes I was making were effective. Still, I can't see the A-Ubuntu name on the network. Any idea what it can be? Where should I look next?
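
    One hedged pointer: browsing by name on a Windows-style network is done by Samba's nmbd (NetBIOS), not by avahi/mDNS, so the checks worth running on A-Ubuntu are whether nmbd survived the upgrade and what name it is registering:

        sudo service nmbd status; sudo service nmbd restart    # nmbd does the name advertising
        testparm -s | grep -iE 'netbios|workgroup'             # what name/workgroup smb.conf now claims
        nmblookup A-Ubuntu                                      # does the name resolve at all?
        sudo ufw status                                         # 137/138 udp and 139/445 tcp must be allowed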

    Read the article

  • adding trac to apache2 configuration file

    - by Michael
    I currently have apache2 running from a mythtv/mythweb install. This made two config files available in sites-enabled. One of them ("default-mythbuntu") has the VirtualHost directive and seems like a normal file (except for a change to the directory index). There is also a mythweb.conf file that only has directives and sets various variables for mythweb. I want to host a Trac site as well. According to this site: http://trac.edgewall.org/wiki/TracOnUbuntu there are some settings I need to set for the Trac site. It gives directions for making a VirtualHost, but I think I should use the current VirtualHost and just add the directives (I'll need to change the default location they point to from the site above to just point to the Trac location). Where should I put these directives? Can I make a trac.conf with just the settings for Trac and enable it, or do I need to put them in the default-mythbuntu file? I don't like the latter because it doesn't separate out the Trac configs. How does Apache know that the mythweb config (and the trac.conf I want to make) belongs to the VirtualHost defined in default-mythbuntu? It is the only VirtualHost being defined on my system, if that matters.
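
    To the "can I make a trac.conf" part: yes; anything in Apache's conf.d (or an enabled conf file) is included at server scope, and a <Location> block there applies inside the existing VirtualHost because that vhost answers every request on the box. A hedged sketch of the mod_python variant from the Trac page, with the environment path an assumption:

        # /etc/apache2/conf.d/trac.conf
        <Location /trac>
            SetHandler mod_python
            PythonInterpreter main_interpreter
            PythonHandler trac.web.modpython_frontend
            PythonOption TracEnv /var/lib/trac/myproject
            PythonOption TracUriRoot /trac
        </Location>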

    Read the article
