Search Results

Search found 6992 results on 280 pages for 'exist'.

Page 249 of 280

  • Win 8: Adding a boot volume to an MBR dynamic disk [NOT about changing to basic disks]

    - by Stilez
    (This is NOT aiming to convert to basic disk. In this question, the disk stays dynamic but becomes bootable) There doesn't seem to be a clear, well stated answer I can find, for the question "What are the criteria for Windows 8 to successfully boot from an MBR dynamic disk", or "how do I fix a dynamic MBR partition that's failing boot"? I've tried to educate myself but can't find crucial information to clear it all up. My existing HDD/SSD setup: DISK 0 ~ 60GB SSD/MBR/basic: (350MB recovery)(60GB windows 8 bootable) DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data) DISK 2 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB unallocated)(410GB mirrored data) DISKS 3, 4, 5: (ignored for simplicity: 2xHDD RAID1 + caching SSD) I'm heavy duty on data crunching and virtualisation, just maxxed out 32GB RAM @ 2133 and moved to 4960X + 64GB. Disk 0 is a pure system disk of little value, and virtualisations runs off mirrored SSDs (Samsung 840 Pro 512 x 2) for double speed reading and so they snapshot in reasonable time. I'm using 4 SATA3 ports and the board only has two decent Intel ports (onboard Marvell are poorer quality). I'm wary of choosing between LSI, HighPoint and other 3rd party controllers as I'm unfamiliar with the maze of decent RAID cards (that's a whole other issue!). I want to cut down my SSD needs by moving the boot volume and caching volume to the 840 pros, giving a setup with 2 fewer SSDs: DISK 0 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(60GB boot)(410GB mirrored data) DISK 1 ~ 512GB SSD/MBR/dynamic: (350MB recovery)(30GB cache for the ICH10R mirror)(30GB temp)(410GB mirrored data) DISKS 2, 3: (2xHDD RAID1) Intel's RST allows this, Win 8 allows booting off a MBR/dynamic disk, and the two 60GB SSDs are hardly the fastest SSDs anyway, they'll get repurposed. Moving the caching volume is easy. Moving the boot volume has me stumped. The difficulty is, I'm hitting a wall of knowledge here. I have a UEFI Asus motherboard with an previous traditional MBR/basic boot disk, and I want it to boot from a disk and volume that's MBR/dynamic. The disk copy is physically ok (Partition Wizard Server will copy to dynamic volumes) but then hits a light blue 0xc000000e boot error. No real surprise, I expected to have some boot fixing, but had expected Windows to boot-fix it (all drivers exist), or the usual manual fixes to work. Specifically, I don't know enough, to know what's got to be manually checked and perhaps corrected for the disk to boot (legacy/uefi/bios, odd partitions, boot tables, disk IDs, hidden boot files, oh my!), or if I need to change any of this secure boot/UEFI/legacy stuff in the bios, convert a 512 SSD to basic and then back to dynamic when working, or if the issue is pure OS config using "diskpart", "bootsect" and "bootrec" from the Win8 DVD. The old system disk still boots but I don't know enough to figure what to fix, to make the system boot as I want. The answers probably aren't hard but the real issue is my confusion and missing information. Thanks for helping!
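
    For reference, the usual manual repair sequence from the Win8 DVD's recovery console looks roughly like the sketch below; the W: and S: drive letters are placeholders that have to be confirmed in diskpart first, and whether bootrec/bcdboot cope with a dynamic MBR volume is exactly the open question here.

      rem Boot the Win8 DVD, then Repair your computer -> Troubleshoot -> Command Prompt
      rem In diskpart, "list volume" shows the Windows volume (here W:) and the active system partition (here S:)
      diskpart
      rem Rewrite the MBR boot code, then rebuild the BCD store from the installations found on disk
      bootrec /fixmbr
      bootrec /rebuildbcd
      rem Recreate the boot files on the system partition
      bcdboot W:\Windows /s S: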

    Read the article

  • How to bind old user's SID to new user to remain NTFS file ownership and permissions after freshly reinstall of Windows?

    - by LiuYan ??
    Each time we reinstall Windows, it creates a new SID for the user even if the username is the same as before. // example (not real SID format, just to show the problem) user SID -------------------- liuyan S-old-501 // old SID before reinstall liuyan S-new-501 // new SID after reinstall The annoying problem after a reinstall is that NTFS file ownership and permissions on the hard drive are still associated with the old user's SID. I want to keep the ownership and permission settings of the NTFS files and let the new user take over the old user's SID, so that I can access the files as before without permission problems. The cacls command line tool can't be used in this situation, because the files don't belong to the new user, so it fails with an Access is denied error, and it can't change ownership. Even if I change the ownership via the SubInACL tool, cacls can't remove the old user's permissions because the old user does not exist on the new installation, and it can't copy the old user's permissions to the new user. So, can we simply bind the old user's SID to the new user on the freshly installed Windows? Sample test batch @echo off REM Additional tools used in this script REM PsGetSid http://technet.microsoft.com/en-us/sysinternals/bb897417 REM SubInACL http://www.microsoft.com/en-us/download/details.aspx?id=23510 REM REM make sure these tools are added into PATH set account=MyUserAccount set password=long-password set dir=test set file=test.txt echo Creating user [%account%] with password [%password%]... pause net user %account% %password% /add psgetsid %account% echo Done ! echo Making directory [%dir%] ... pause mkdir %dir% dir %dir%* /q echo Done ! echo Changing permissions of directory [%dir%]: only [%account%] and [%UserDomain%\%UserName%] has full access permission... pause cacls %dir% /G %account%:F cacls %dir% /E /G %UserDomain%\%UserName%:F dir %dir%* /q cacls %dir% echo Done ! echo Changing ownership of directory [%dir%] to [%account%]... pause subinacl /file %dir% /setowner=%account% dir %dir%* /q echo Done ! echo RunAs [%account%] user to write a file [%file%] in directory [%dir%]... pause runas /noprofile /env /user:%account% "cmd /k echo some text %DATE% %TIME% > %dir%\%file%" dir %dir% /q echo Done ! echo Deleting and Recreating user [%account%] (reinstall simulation) ... pause net user %account% /delete net user %account% %password% /add psgetsid %account% echo Done ! %account% is recreated, it has a new SID now echo Now, use this "same" account [%account%] to access [%dir%], it will failed with "Access is denied" pause runas /noprofile /env /user:%account% "cmd /k cacls %dir%" REM runas /noprofile /env /user:%account% "cmd /k type %dir%\%file%" echo Done ! echo Changing ownership of directory [%dir%] to NEW [%account%]... pause subinacl /file %dir% /setowner=%account% dir %dir%* /q cacls %dir% echo Done ! As you can see, "Account Domain not found" is actually the OLD [%account%] user echo Deleting user [%account%] ... pause net user %account% /delete echo Done ! echo Deleting directory [%dir%]... pause rmdir %dir% /s /q echo Done !
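
    For what it's worth, a local account can't normally be handed an arbitrary (old) SID, so the practical route is usually to re-take ownership and rebuild the ACLs for the new account rather than rebind the old SID. A rough sketch with the in-box tools; D:\Data, the liuyan account and the S-1-5-21... SID are placeholders:

      rem Recursively take ownership as the current (new) user
      takeown /F D:\Data /R /D Y
      rem Grant the new account inherited full control
      icacls D:\Data /grant liuyan:(OI)(CI)F /T
      rem Strip the orphaned ACL entries left behind by the old SID
      icacls D:\Data /remove *S-1-5-21-xxxx-xxxx-501 /T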

    Read the article

  • Rails 3 shows 404 error instead of index.html (nginx + unicorn)

    - by Miko
    I have an index.html in public/ that should be loading by default but instead I get a 404 error when I try to access http://example.com/ The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved. This has something to do with nginx and unicorn which I am using to power Rails 3 When take unicorn out of the nginx configuration file, the problem goes away and index.html loads just fine. Here is my nginx configuration file: upstream unicorn { server unix:/tmp/.sock fail_timeout=0; } server { server_name example.com; root /www/example.com/current/public; index index.html; keepalive_timeout 5; location / { try_files $uri @unicorn; } location @unicorn { proxy_pass http://unicorn; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_redirect off; } } My config/routes.rb is pretty much empty: Advertise::Application.routes.draw do |map| resources :users end The index.html file is located in public/index.html and it loads fine if I request it directly: http://example.com/index.html To reiterate, when I remove all references to unicorn from the nginx conf, index.html loads without any problems, I have a hard time understanding why this occurs because nginx should be trying to load that file on its own by default. -- Here is the error stack from production.log: Started GET "/" for 68.107.80.21 at 2010-08-08 12:06:29 -0700 Processing by HomeController#index as HTML Completed in 1ms ActionView::MissingTemplate (Missing template home/index with {:handlers=>[:erb, :rjs, :builder, :rhtml, :rxml, :haml], :formats=>[:html], :locale=>[:en, :en]} in view paths "/www/example.com/releases/20100808170224/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/paperclip/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/haml/app/views"): /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/paths.rb:14:in `find' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/lookup_context.rb:79:in `find' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/base.rb:186:in `find_template' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:45:in `_determine_template' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:23:in `render' /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/haml-3.0.15/lib/haml/helpers/action_view_mods.rb:13:in `render_with_haml' etc... -- nginx error log for this virtualhost comes up empty: 2010/08/08 12:40:22 [info] 3118#0: *1 client 68.107.80.21 closed keepalive connection My guess is unicorn is intercepting the request to index.html before nginx gets to process it.
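
    One likely explanation: try_files $uri never maps the bare / to index.html (the index directive is not consulted once try_files takes over), so the root request falls through to @unicorn and Rails answers with its own 404. A sketch of the location block with the static index tried first:

      location / {
          # file first, then the static index, then hand off to Rails
          try_files $uri $uri/index.html @unicorn;
      }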

    Read the article

  • Handling site not found and page not found with dynamic mass virtual hosting

    - by Rick Moynihan
    I have recently set up mass virtual hosting in Apache so that all we need to do is create a directory to create a new vhost. We're then also using wildcard DNS to map all subdomains to the server running our Apache instance. This works excellently; however, I'm now having trouble configuring it to fail over to an appropriate default/error page when the vhost directory does not exist. The problem appears to stem from my desire to handle two distinct error conditions: vhost not found, i.e. there was no directory found matching the host supplied in the HTTP host header. I'd like this to display an appropriate site not found error page. The 404 page not found condition within the vhost. Additionally I have a specialised "api" vhost in its own vhost block. I've tried a number of variations and none seem to exhibit the behaviour I want. Here's what I'm working with right now: NameVirtualHost *:80 <VirtualHost *:80> DocumentRoot /var/www/site-not-found ServerName sitenotfound.mydomain.org ErrorDocument 500 /500.html ErrorDocument 404 /500.html </VirtualHost> <VirtualHost *:80> ServerName api.mydomain.org DocumentRoot /var/www/vhosts/api.mydomain.org/current # other directives, e.g. setting up passenger/rails etc... </VirtualHost> <VirtualHost *:80> # get the server name from the Host: header UseCanonicalName Off VirtualDocumentRoot /var/www/vhosts/%0/current # other directives ... e.g proxy passing to api etc... ErrorDocument 404 /404.html </VirtualHost> My understanding is that the first vhost block is used as the default, so I have this here as my catch-all site. Next I have my API vhost, and then finally my mass-vhost block. So for a domain that doesn't match the first two ServerNames and has no corresponding directory in /var/www/vhosts/ I'd expect it to fall over to the first vhost; however, with this setup, all domains resolve to my default site-not-found. Why is this? By putting the mass-vhost block first, I can get the mass vhosts to resolve properly, but not my site-not-found vhost... and in this case I can't seem to find a way to distinguish between a page-level 404 in the vhost and the case where the VirtualDocumentRoot fails to find a vhost directory (this appears to use the 404 also). Any help out of this bind is much appreciated!
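
    One approach, sketched below on the assumption that mod_rewrite is available, is to give the mass-vhost block a wildcard ServerAlias so named requests actually reach it, keep the site-not-found block first as the default, and explicitly bounce requests whose vhost directory is missing:

      <VirtualHost *:80>
          ServerAlias *                     # catches every name not claimed by an earlier vhost
          UseCanonicalName Off
          VirtualDocumentRoot /var/www/vhosts/%0/current
          RewriteEngine On
          RewriteCond /var/www/vhosts/%{HTTP_HOST}/current !-d
          RewriteRule ^ http://sitenotfound.mydomain.org/ [L,R=302]
          ErrorDocument 404 /404.html       # still covers page-level 404s inside an existing vhost
      </VirtualHost>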

    Read the article

  • Passenger 2.2.4, nginx 0.7.61 and SSL

    - by boompa
    Has anyone had any luck configuring Passenger and nginx with SSL? I've spent hours trying to get this configuration working as I'd like, using what few resources there are out there on the net, and I can't get any of the supposedly forwarded headers to show up in the Rails controller. For example, with a conf file of the following (and multiple variations thereof): server { listen 3000; server_name .example.com; root /Users/website/public; passenger_enabled on; rails_env development; } server { listen 3443; root /Users/website/public; rails_env development; passenger_enabled on; ssl on; #ssl_verify_client on; ssl_certificate /Users/website/ssl/server.crt; ssl_certificate_key /Users/website/ssl/server.key; #ssl_client_certificate /Users/website/ssl/CA.crt; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X_FORWARDED_PROTO https; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_set_header X-SSL-Subject $ssl_client_s_dn; #proxy_set_header X-SSL-Issuer $ssl_client_i_dn; proxy_redirect off; proxy_max_temp_file_size 0; } and Rails code in the controller like this: request.headers.each { |k, v| RAILS_DEFAULT_LOGGER.error "Header #{k} Val #{v}" } other headers appear, but not those set in nginx, e.g.: Header rack.multithread Val false Header REQUEST_URI Val /login/new Header REMOTE_PORT Val 64021 Header rack.multiprocess Val true Header PASSENGER_USE_GLOBAL_QUEUE Val false Header PASSENGER_APP_TYPE Val rails Header SCGI Val 1 Header SERVER_PORT Val 3443 Header HTTP_ACCEPT_CHARSET Val ISO-8859-1,utf-8;q=0.7,*;q=0.7 Header rack.request.query_hash Val Header DOCUMENT_ROOT Val /Users/website/public I've even gone so far as to modify Passenger's abstract_request_handler's main_loop method, i.e., headers, input = parse_request(client) if headers if headers[REQUEST_METHOD] == PING process_ping(headers, input, client) else headers.each { |h,v| log.unknown "abstract_request_handler: #{h} = #{v}" } process_request(headers, input, client) end end only to find that the supposedly added headers do not exist there either: abstract_request_handler: HTTP_KEEP_ALIVE = 300 abstract_request_handler: HTTP_USER_AGENT = Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1) Gecko/20090624 Firefox/3.5 abstract_request_handler: PASSENGER_SPAWN_METHOD = smart-lv2 abstract_request_handler: CONTENT_LENGTH = 0 abstract_request_handler: HTTP_IF_NONE_MATCH = "b6e8b9afbc1110ee3bf0c87e119252ad" abstract_request_handler: HTTP_ACCEPT_LANGUAGE = en-us,en;q=0.5 abstract_request_handler: SERVER_PROTOCOL = HTTP/1.1 abstract_request_handler: HTTPS = on abstract_request_handler: REMOTE_ADDR = 127.0.0.1 abstract_request_handler: SERVER_SOFTWARE = nginx/0.7.61 abstract_request_handler: SERVER_ADDR = 127.0.0.1 abstract_request_handler: SCRIPT_NAME = abstract_request_handler: PASSENGER_ENVIRONMENT = development abstract_request_handler: REMOTE_PORT = 64021 abstract_request_handler: REQUEST_URI = /login/new abstract_request_handler: HTTP_ACCEPT_CHARSET = ISO-8859-1,utf-8;q=0.7,*;q=0.7 abstract_request_handler: SERVER_PORT = 3443 abstract_request_handler: SCGI = 1 abstract_request_handler: PASSENGER_APP_TYPE = rails abstract_request_handler: PASSENGER_USE_GLOBAL_QUEUE = false I'm tired of banging my head against the wall, so I'd truly appreciate any help I can get!
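
    One thing worth noting: with the Passenger nginx module there is no proxy hop, so proxy_set_header never reaches the application; the values have to be injected as CGI parameters instead. A sketch, assuming passenger_set_cgi_param is available in this Passenger release (worth verifying against the 2.2.4 documentation):

      server {
          listen 3443;
          ssl on;
          # ... certificates and ciphers as in the original ...
          passenger_enabled on;
          # Passenger serves the app directly, so forwarding info goes in as CGI params,
          # which Rails then sees as X-Forwarded-Proto / X-Real-IP request headers
          passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;
          passenger_set_cgi_param HTTP_X_REAL_IP $remote_addr;
      }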

    Read the article

  • Looking for a new backup solution to replace dying tape drive

    - by E3 Group
    We're running Windows Server 2003 SBS and another machine with Server 2003 Standard on it. The SBS server is about 7 years old running pretty much 24/7 - a HP server of some description. We have an Ultrium 448 cycling LTO2 400GB tapes daily and incrementally backing up approximately 100gb worth of data (20gb C:\ and system state, 40gb exchange, 40gb database for some crap marketing software) on BackupExec 10D. As of 5 months ago, the backups have been consistently failing with IO errors, bad reads and some write errors. When I say consistent, I mean every time and we haven't had a proper backup for the entire 5 months - So if the server explodes tomorrow, 7 years worth of data will just cease to exist. I've only just recently rejoined the company and am looking at rectifying the more concerning problems, so the first thing I did was try a backup to an USB2.0 external drive. It was excruciatingly slow. In fact it was so slow it took 40 hours and it still wasn't finished. I ended up cancelling it and reconfiguring the selections again to reduce file size. This, however, isn't a permanent solution. I concluded that the IO error was either from a faulty tape drive (which has a tape stuck in there right now and not coming out) or from a dying SCSI controller. Neither of them are good news and both are extremely expensive to fix. I'm operating on an extremely low budget so have been looking at outsourcing the backups. A company in Sydney (where I'm located) offer incremental online backups via a NAS. It costs almost double a new tape drive but offers monthly repayments which will let us get through times when cash flow is minimal. It seems like a sweet deal but it is still a little bit pricey. So I'm looking for a cheaper, yet reliable solution. Maybe some in-house NAS or something offsite? The idea is to avoid using tapes. Are there any recommendations for rectifying my current situation? Or are tapes the only way to go? I'm concerned that the server will die one day in the near future and I must be able to restore it to another server with different hardware.

    Read the article

  • Postfix: How to apply header_checks only for specific Domains?

    - by Lukas
    Basically what I want to do is rewriting the From: Header, using header_checks, but only if the mail goes to a certain domain. The problem with header_check is, that I can't check for a combination of To: and From: Headers. Now I was wondering if it was possible to use the header_checks in combination with smtpd_restriction_classes or something similar. I've found a lot information about header_checks and multiple header fields, when searching the net. All of them basically telling me, that one can't combine two header for checking. But I didn't find any information if it was possible to only do a header check if a condition (eg. mail goes to example.com) was met. Edit: While doing some more Research I've found the following article which suggests to add a Service in postfix master.cf, use a transportmap to pass mails for the Domain to that service and have a separate header_check defined with -o. The thing is that I can't get it to work... What I did so far is adding the Service to the master.cf: example unix - - n - - smtpd -o header_checks=regexp:/etc/postfix/check_headers_example Adding the followin Line to the transportmap: example.com example: Last but not least I have two regexp-files for header checks, one for the newly added service, and one to redirect answers to the rewritten domain. check_headers_example: /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2 Obviously if someone answers, the mail would go to nirvana, so I have the following check_headers defined in the main postfix process: /To:(.*)<(.*)@mydomain.example.com>(.*)/ REDIRECT [email protected]$2 Somehow the Transport is ignored. Any help is appreciated. Edit 2: I'm still stuck... I did try the following: smtpd_restriction_classes = header_rewrite header_rewrite = regexp:/etc/postfix/rewrite_headers_domain smtpd_recipient_restrictions = (some checks) check_recipient_access hash:/etc/postfix/rewrite_table, (more checks) In the rewrite_table the following entries exist: /From:(.*)@mydomain.ain>(.*)/ REPLACE From:[email protected]>$2 All it gets me is a NOQUEUE: reject: 451 4.3.5 Server configuration error. I couldn't find any resources on how you would do that but some people saying it wasn't possible. Edit 3: The reason I asked this question was, that we have a customer (lets say customer.com) who uses some aliases that will forward mail to a domain, let's say example.com. The mailserver at example.com does not accept any mail from an external server that come from a sender @example.com. So all mails that are written from example.com to [email protected] will be rejected in the end. An exception on example.com's mailserver is not possible. We didn't really solve this problem, but will try to work around it by using lists (mailman) instead of aliases. This is not really nice though, nor a real solution. I'd appreciate all suggestions how this could be done in a proper way.
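
    For reference, header_checks are applied by the cleanup daemon, not by smtpd, which is why hanging them off an smtpd service or a restriction class has no effect. The usual pattern is a dedicated smtp delivery transport in master.cf with its own smtp_header_checks (Postfix 2.5 or later), selected per destination through transport_maps; a sketch, with rewrite_example as a made-up transport name:

      # master.cf
      rewrite_example unix -  -  n  -  -  smtp
        -o smtp_header_checks=regexp:/etc/postfix/check_headers_example

      # main.cf
      transport_maps = hash:/etc/postfix/transport

      # /etc/postfix/transport   (then run: postmap /etc/postfix/transport)
      example.com   rewrite_example: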

    Read the article

  • Snow Leopard can see Windows shares in Finder but can't connect

    - by Randy Miller
    I have an iMac with the latest version of Snow Leopard on it. I have a NAS drive and a Windows machine that both show up in the Finder's 'Shared' section. However, if I click on them, Finder says "Connection Failed". Clicking on 'Connect As...' gives an error dialog that says "The server 'blah' may not exist or it is unavailable at this time." Points of interest: All machines are receiving their IP/DNS info from the router using DHCP. I have a Mac Mini on the same network that connects to the NAS drive and windows machine perfectly with no config (i.e. worked out of the box). Both Macs are on the same version of Snow Leopard. There is no password required to access the NAS share. I've never setup a WINS server on any machines and all machines are using 'workgroup' by default. I've tried putting "workgroup" in the Mac's workgroup entry and have tried leaving it blank, neither solves the problem. Here are some things I have tried: Finder-Connect To Server: smb:///share. This works, but by name does not. Terminal-mount_smbfs //@/share share. This also works by ip, but not be name, resulting in "mount_smbfs: server connection failed: No route to host". If I put the IP address of the NAS in the WINS server entry in the Mac's network setup, I can connect by name. It obviously seems to be a name resolution error, but I can't figure out why. The only thing that has changed since it used to work is that I got a new router that now gives out DHCP (all machines are dhcp clients) addresses of 192.168.x.x, but used to be 10.0.x.x. I've grep'd through everything that might have saved that old address, but can't find anything. It's also worth noting that the second Mac (the one that connects successfully) was added to the network after the router change. Please let me know if there are additional points of information needed to troubleshoot this further. Thanks, Randy

    Read the article

  • Problems migrating an EBS backed instance over AWS Regions

    - by gshankar
    Note: I asked this question on the EC2 forums too but haven't received any love there. Hopefully the ServerFault community will be more awesome. The new AWS Sydney region opening up is something that we've been waiting for for a long time but I'm having a lot of trouble migrating our instances over from N. California. I managed to migrate 1 instance over using CloudyScripts to move a snapshot and then firing up a new instance in the Sydney region. This was a very new instance so both the source and destination were running on a Ubuntu 12.04 LTS server and I had no issues there. However, the rest of our instances are all Ubuntu 10.04 LTS and with these, I'm having a lot of problems. I've tried following: 1- following the AWS whitepaper on moving instances which was given to us at the recent Customer Appreciation Day in Sydney where the new region was launched. The problem with this approach was with the last step (Step 19) here you register the image: ec2-register -s snap-0f62ec3f -n "Wombat" -d "migrated Wombat" --region ap-southeast-2 -a x86_64 --kernel aki-937e2ed6 --block-device-mapping "/dev/sdk=ephemeral0" I keep getting this error: Client.InvalidAMIID.NotFound: The AMI ID 'ami-937e2ed6' does not exist which I think is due to the kernel_id not existing in the Sydney region? 2- Using CloudyScripts to move a snapshot and then creating a new volume and attaching to a new instance in Sydney This results in the instance just hanging on boot and failing the status checks. I can't SSH in or look at the server log I suspect that my issue is with finding the right kernel_id for the volume in the new region. However I can't seem to work out how to go about finding this kernel_id, the ones I've tried (from the original instance) don't result in the Client.InvalidAMIID.NotFound: The AMI ID 'ami-937e2ed6' error and any other kernel_id just won't boot. I've tried both 12.04 and 10.04 versions of Ubuntu. Nothing seems to work, I've been banging my head against a wall for a while now, please help! New (broken) instance i-a1acda9b ami-9b8611a1 aki-31990e0b Source instance i-08a6664e ami-b37e2ef6 aki-937e2ed6 p.s. I also tried following this guide on updating my Ubuntu LTS version to 12.04 before doing the migration but it didn't seem to work either, still getting stuck on updating the kernel_id http://ubuntu-smoser.blogspot.com.au/2010/04/upgrading-ebs-instance.html
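
    Kernel IDs (aki-...) are region-specific, so an aki from us-west-1 will never resolve in ap-southeast-2; the ec2-register call needs the matching PV-GRUB kernel from the new region. A hedged way to look it up with the EC2 API tools (filter names as per the ec2-describe-images docs; the architecture must match the instance):

      ec2-describe-images -o amazon --region ap-southeast-2 \
          --filter "image-type=kernel" --filter "name=pv-grub-hd0*" --filter "architecture=x86_64"
      # then register the snapshot against the aki returned above
      ec2-register -s snap-0f62ec3f -n "Wombat" --region ap-southeast-2 -a x86_64 \
          --kernel aki-XXXXXXXX --block-device-mapping "/dev/sdk=ephemeral0"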

    Read the article

  • Apache works on http and https, SVN only on http

    - by user27880
    I asked a question about this before, and got most of it fixed. If I switch off https redirect and go to http://mydomain.com/svn/test0, I get the authentication window popping up, and I can enter my AD credentials, and bingo. Switching https redirect back on, if I go to http://mydomain.com I am automatically redirected to https, which is what I want, and the 'CerntOS test page' pops up. Perfect. The problem occurs when I want to go to one of my test repos via https. Here is my httpd.conf file, with confidential information suitably hosed... === NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin [email protected] ServerName svn.mycompany.com ErrorLog logs/subversion-error_log CustomLog logs/subversion-access_log common Redirect permanent / https://svn.mycompany.com </VirtualHost> <VirtualHost svn.mycompany.com:443> SSLEngine On SSLCertificateFile /etc/httpd/ssl/wildcard.mycompany.com.crt SSLCertificateKeyFile /etc/httpd/ssl/wildcard.mycompany.com.key SSLCertificateChainFile /etc/httpd/ssl/intermediate.crt ServerName svn.mycompany.com ServerAdmin [email protected] ErrorLog logs/subversion-error_log CustomLog logs/subversion-access_log common <Location /svn> DAV svn SVNParentPath /usr/local/subversion SVNListParentPath off AuthName "Subversion Repositories" # NT Logon Details Require valid-user AuthBasicProvider file ldap AuthType Basic AuthzLDAPAuthoritative off AuthUserFile /etc/httpd/conf/svnpasswd AuthName "Subversion Server II" AuthLDAPURL "ldap://our-pdc:389/OU=Company Name,DC=com,DC=co,DC=uk?sAMAccountName?sub?(objectClass=*)" AuthLDAPBindDN "DOMAIN\subversion" AuthLDAPBindPassword XXXXXXX AuthzSVNAccessFile /etc/httpd/conf/svnaccessfile </Location> </VirtualHost> === Now, in ssl_error_log, I get === ==> /etc/httpd/logs/ssl_error_log <== [Fri Nov 01 16:07:55 2013] [error] [client XXX.XXX.XXX.XXX] File does not exist: /var/www/html/svn === This comes from the DocumentRoot directive further up the httpd.conf file, which of course points to /var/www/html. I know that this location is wrong, but how can I get SVN to serve the repo? I tried an Alias directive as so .. Alias /svn /usr/local/subversion .. but this didn't work. I tried to alter the Location directive. That didn't work either. Can someone help? I sense that this is so close to being solved ... Thanks. Edit: apachectl -S output: [root@svn conf]# apachectl -S VirtualHost configuration: 127.0.0.1:443 svn.mycompany.com (/etc/httpd/conf/httpd.conf:1020) wildcard NameVirtualHosts and default servers: default:443 svn.mycompany.com (/etc/httpd/conf.d/ssl.conf:74) *:80 is a NameVirtualHost default server svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012) port 80 namevhost svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012) Syntax OK
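
    The apachectl -S output suggests the problem: the SSL vhost only answers on 127.0.0.1:443 (the name svn.mycompany.com evidently resolves to loopback on the box), so external https requests land in the stock ssl.conf vhost whose DocumentRoot is /var/www/html, which explains the missing /var/www/html/svn. A sketch of the usual fix, assuming Apache 2.2-style name-based SSL with the single wildcard certificate; the default vhost in ssl.conf may also need to be trimmed or removed:

      NameVirtualHost *:443
      <VirtualHost *:443>
          ServerName svn.mycompany.com
          SSLEngine On
          SSLCertificateFile      /etc/httpd/ssl/wildcard.mycompany.com.crt
          SSLCertificateKeyFile   /etc/httpd/ssl/wildcard.mycompany.com.key
          SSLCertificateChainFile /etc/httpd/ssl/intermediate.crt
          <Location /svn>
              DAV svn
              SVNParentPath /usr/local/subversion
              # ... the existing auth directives unchanged ...
          </Location>
      </VirtualHost>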

    Read the article

  • Why would a PCI scan fail because of components that are not even installed?

    - by Brandon
    Recently a PCI scan was run against a web server and the result was a failure. Some of the issues could be fixed, however others simply make no sense to me. The machine was a clean install, there are only two things running, the .NET 3.5 website and the dotDefender web application firewall. However there are several errors similar to: Web server vulnerability Impact: /servlet/SessionServlet: JRun or Netware WebSphere default servlet found. All default code should be removed from servers. Risk Factor: Medium/ CVSS2 Base Score: 6.4 CVE: CVE-2000-0539 I'm not sure what this is, but I can't find anything on the server that looks anything like this. Web server vulnerability Impact: /some.php?=PHPE9568F35- D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests that contain specific QUERY strings. Risk Factor: Medium/ CVSS2 Base Score: 5.0 PHP is not installed. Trying to add that query string to any page does nothing because the application ignores it. And doing that phpVersion check results in a 404. Similar to this, there are dozens of errors related to JSP and Oracle that are also not installed. Web server vulnerability Impact: /admin/database/wwForum.mdb: Web Wiz Forums pre 7.5 is vulnerable to Cross-Site Scripting attacks. Default login/pass is Administrator/letmein Risk Factor: Medium/ CVSS2 Base Score: 4.0 There are several errors like this, telling me that Web Wiz Forums, Alan Ward A-Cart 2.0, IlohaMail, etc. are all vulnerable. These are not installed or referenced anywhere I can find. There are even references to pages that simply don't exist, like OpenAutoClassifieds. Can anyone point me in the right direction as to why these errors are showing up or where I might look to find these components if they are in fact installed? Note: This website and server are for a subdomain of the main website. The main website runs on a server that is running Apache/PHP, but I don't have access to that server. The report says the subdomain was the site being scanned, but is it possible for it to have scanned the main site as well?

    Read the article

  • configuring slime in emacs

    - by CodeKingPlusPlus
    I am in the process of configuring slime for emacs. So far I have read about basic functionality for common lisp such as C-c C-q, which invokes the command slime-close-parens-at-point, which places the proper number of parens where the cursor is. Another command that seemed cool was invoked by C-c C-c and it would pass the code you are editing in a buffer to the REPL and "compile" it. Why won't these commands work for me? Anyway, I have downloaded slime via M-x list-packages and do not seem to have this functionality (C-h w and then any of these commands tells me that these commands do not exist). So, I saw a bunch of other slime extensions such as 'slime-repl', 'slime-fuzzy' and 'hippie-expand-slime'. So I again used M-x list-packages and downloaded them. Still I did not have these commands. Here is the content of my emacs file relevant to slime: ;;;Common Lisp and Slime (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151") (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-repl-201000404") (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/hippie-expand-slime-20130226.1656") (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-fuzzy-20100404") (require 'slime) (setq slime-lisp-implementations `((sbcl ("/usr/bin/sbcl")) (ecl ("/usr/bin/ecl")) (clisp ("/usr/bin/clisp" "-q -I")))) (require 'slime-repl) (require 'slime-fuzzy) (require 'hippie-expand-slime) When I execute M-x slime I get the following message in the inferior-lisp buffer where I can execute common lisp code (however, shouldn't this be the slime-repl since I required it?): STYLE-WARNING: redefining EMACS-INSPECT (#<BUILT-IN-CLASS T>) in DEFMETHOD STYLE-WARNING: Implicitly creating new generic function STREAM-READ-CHAR-WILL-HANG-P. WARNING: These Swank interfaces are unimplemented: (DISASSEMBLE-FRAME SLDB-BREAK-AT-START SLDB-BREAK-ON-RETURN) ;; Swank started at port: 46533. Then a slime-error buffer is created with the contents: Invalid protocol message: Symbol "CREATE-REPL" not found in the SWANK package. Line: 1, Column: 28, File-Position: 28 Stream: #<SB-IMPL::STRING-INPUT-STREAM {10056B9C33}> (:emacs-rex (swank:create-repl nil) "COMMON-LISP-USER" t 5) How should I modify my emacs file to give me the functionality of those commands? In my emacs file am I not loading the necessary files? Do I need to install an additional package? If you need more information let me know! All help is much appreciated!
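
    The "CREATE-REPL not found in the SWANK package" error is the classic sign of the Emacs-side SLIME and the Lisp-side Swank being out of step; here a 2013 slime core is mixed with 2010-vintage slime-repl and slime-fuzzy packages. A sketch of a single-source setup, assuming the packaged contribs ship with that slime snapshot (slime-fancy pulls in the REPL and fuzzy completion):

      ;; load only the one SLIME checkout; remove the separate slime-repl/slime-fuzzy packages
      (add-to-list 'load-path "/home/s2s2/.emacs.d/elpa/slime-20130626.1151")
      (require 'slime-autoloads)
      (setq slime-lisp-implementations
            `((sbcl ("/usr/bin/sbcl"))
              (ecl ("/usr/bin/ecl"))
              (clisp ("/usr/bin/clisp" "-q -I"))))
      (slime-setup '(slime-fancy))   ; slime-fancy includes slime-repl, slime-fuzzy, etc.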

    Read the article

  • Outlook refuses to connect to Exchange

    - by wfaulk
    Outlook 2007 under Windows XP connecting to Exchange 2003 SP2: when started, it flips back and forth between "Connecting to Exchange Server" and "Disconnected" three or four times, then gives up and stays disconnected. I tried deleting the ost file (which was nearly 2GB), turning Cached mode on and off, recreating the account inside the Mail control panel, changing the account to use HTTP, and probably some other things. None of it seemed to make any difference, until … After fiddling with it for a while, I got this absurd error message dialog at startup, and it exits after I click OK: Cannot start Microsoft Office Outlook. Cannot open the Outlook window. The set of folders cannot be opened. Microsoft Exchange is not available. Either there are network problems or the Exchange server is down for maintenance. (I'm not sure if I can even trust that message. It's so long, it just feels like a random offset into Outlook's stack of error messages.) Either way, the Exchange server is available to everyone else, and is available via OWA from that computer. I ran Process Explorer against Outlook and it showed 5 or so ESTABLISHED connections to our Exchange server, plus listening on two UDP ports, and two CLOSE_WAIT connections to localhost. If I managed to look at Outlook's IP connections while it was doing its Connecting/Disconnected dance, it had a huge number of connections open to the Exchange server. It more than filled ProcExp's dialog box; I'm guessing at least 20, probably more. The only other odd thing is that our network admin at some point added a wildcard DNS record to the domain name that we use for email, and now Outlook will sometimes (always?) start by complaining about autodiscover.example.com's SSL certificate. There is a web server there, but it doesn't have any sort of email autodiscover anything on it. It doesn't make any difference if I click "OK" or "Cancel" (or whatever the buttons are). I also added a bogus entry for the hostname to Windows' hosts file, pointing it at 127.0.0.2, and it stopped complaining about the certificate. (The CLOSE_WAIT sockets above were from before I made this change, and went away after.) I don't think this is related, as the same problem should exist for everyone, but it might be. This is the second time this user has had this problem. The first time, I never found a solution other than reinstalling Outlook. Now that it's a pattern, I'd like to find a permanent solution, rather than assume it's a random glitch.

    Read the article

  • WinPE, Startnet.CMD and passing variables to second batch file not working

    - by user140892
    I don't know scripting or PowerShell (yes, I need to learn something). I'm not an expert batch file maker either. I have a WinPE flash drive which I use to deploy OS images. I have the WIM, drivers and anything else needed outside the WinPE environment so that updates and changes are easier for me to make. I use the "STARTNET.CMD" batch file which is part of the WinPE. The reason to loop through the drive letters is that the WinPE always gets the X drive letter assigned. The flash drive itself can receive a random letter which always changes. My deployment menu is located on the flash drive itself and not inside the WinPE. This is so that if I need to make a change I don't have to re-do the WinPE. I am able to locate the "menu.bat" batch file and launch it. I use a variable to capture the drive letter. I call the second batch file named "menu.bat" and pass the variable to it. When the second batch file loads, I believe that I am calling the variable correctly. If I break out of the batch file I can echo the variable and see the expected reply. The issue is that I can't use the variable to work with anything in the second batch file. In my test, I can get this to work over and over. When it runs from the real USB flash drive it does not work. I removed comments from the second batch file to make it smaller. My issue is that the files below all get a message stating that the system cannot find the path specified. Diskpart Imagex.exe bcdboot.exe Why can't I get the variable to function properly when I try to use, for example, "ImageX.exe"? Contents of the Startnet.cmd @echo off for %%p in (a b c d e f g h i j k l m n o p q r s t u v w x y z) do if exist %%p:\Tools\ set w=%%p Set execpatch=%w%\Tools\ call %w%:\Menu.bat \Tools\ Contents of the Menu.BAT @echo off set SecondPath=%1 cls :Start cls Echo. Echo.============================================================== Echo. Windows 7 64 Bit Ent Basic Desktops Echo.============================================================== Echo. Echo A. 790 Windows 7 - Basic Echo. Echo. Echo I. Exit Echo. Echo. set /p choice=Choose your option = if not '%choice%'=='' set choice=%choice:~0,1% if '%choice%'=='a' goto 790_Windows_7_Basic echo "%choice%" is not a valid (answer/command) echo. goto start :790_Windows_7_Basic REM DISKPART /s %SecondPath%BatchFiles\Make-Partition.txt %SecondPath%imagex.exe /apply %SecondPath%Images\Win7-64b-Ent-Basic-SysPreped.wim 1 o:\ /verify %SecondPath%bcdboot.exe o:\Windows /s S: Copy %SecondPath%Unattended\unattend.XML o:\Windows\System32\sysprep\unattend.XML /y xcopy %SecondPath%Drivers\790\*.* o:\Windows\INF\790\ /E /Q /Y MD o:\Windows\Setup\Scripts\ Copy %SecondPath%BatchFiles\SetupComplete.cmd o:\Windows\Setup\Scripts\ /y Goto Done :Done Exit
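
    One thing that stands out in startnet.cmd: the call line hands Menu.bat the bare string \Tools\ with no drive letter, so %SecondPath% resolves against WinPE's current drive (X:) instead of the flash drive, which would produce exactly the "system cannot find the path specified" errors. A sketch of startnet.cmd passing the fully qualified path instead, assuming that is indeed the cause:

      @echo off
      for %%p in (a b c d e f g h i j k l m n o p q r s t u v w x y z) do if exist %%p:\Tools\ set w=%%p
      rem pass the drive letter along so %SecondPath% is absolute inside Menu.bat
      call %w%:\Menu.bat %w%:\Tools\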

    Read the article

  • Tomcat and IIS 7 both on different ip's and different ports

    - by n00b
    I have Tomcat and IIS 7 installed together on a Windows 2008 server. The machine has two IPs (134.133.1.1 and 134.133.2.2). I want Tomcat to handle 134.133.1.1, on port 80, and IIS to handle both 134.133.2.2, on port 80 AND 134.133.1.1, on port 443, but can't seem to get the last two together (I can get one or the other by themselves on IIS, along with the first IP address on Tomcat). I have configured Tomcat to successfully listen to ip 134.133.1.1, on port 80 with this configuration; <Connector port="80" protocol="HTTP/1.1" address="134.133.1.1" connectionTimeout="20000" redirectPort="8443" /> I also have a site configured in IIS bound to ip 134.133.1.1, on port 443 (SSL). When I turn on IIS, after Tomcat, I can reach both 134.133.1.1:80 (Tomcat) and 134.133.1.1:443 (IIS) successfully (as desired). The problem now comes when I want to introduce a new site via IIS, at the new ip address. In IIS I have setup a new site at IP 134.133.2.2, port 80. I can not start the site. The event log shows this error; Unable to bind to the underlying transport for [::]:80. The IP Listen-Only list may contain a reference to an interface which may not exist on this machine. The data field contains the error number. I think this is because IIS 7 tries to listen to port 80 on all IPs, and it cant because Tomcat is taking port 80 for 134.133.1.1. From reading, the resolution is to specify the IP address you want IIS to bind on port 80. The problem is, when I add 134.133.2.2 to the iplisten list, then I get a 404 when I try navigating to 134.133.1.1:443. I assume this is because IIS is no longer listening to ANY port on 134.133.1.1. How do I resolve this such that IIS will return both sites? EDIT: Per request my IIS binding for site A is 134.133.2.2 on port 80 (http) and 134.133.2.2 on port 443. For site B in IIS, the binding is 134.133.1.1 on port 443 (https). Note the IPs in this example are just for example purposes, but consistent with my setup.
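
    The usual way out of this, sketched below, is to make the http.sys listen list explicit: once the IP Listen-Only list is non-empty it must contain every address IIS should serve. Tomcat keeps 134.133.1.1:80 as long as no IIS site (for example a leftover Default Web Site) is still bound to *:80.

      netsh http show iplisten
      netsh http add iplisten ipaddress=134.133.2.2
      netsh http add iplisten ipaddress=134.133.1.1
      net stop http /y
      net start w3svc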

    Read the article

  • unable to join domain using virtualbox

    - by FreshPrinceOfSO
    I'm in the process of setting up a VM environment for a MS certification exam (70-462). Following the training kit's instructions, I've set up a domain controller (DC) and two members (SQL-A, SQL-B) thus far. I can't figure out why I can't join the domain. DC IPv4 Address . . . : 10.10.10.10(Preferred) Subnet Mask. . . . : 255.0.0.0 DNS Servers. . . . : ::1 127.0.0.1 SQL-A IPv4 Address . . . : 10.10.10.20(Preferred) Subnet Mask. . . . : 255.0.0.0 DNS Servers. . . . : 10.10.10.10 SQL-B IPv4 Address . . . : 10.10.10.30(Preferred) Subnet Mask. . . . : 255.0.0.0 DNS Servers. . . . : 10.10.10.10 I've read how to do networking between virtual machines in virtualbox and the documentation. After trying various network adapter configurations, I can't get them to communicate in order to have the two members join the domain. When I ping from .30 to .10, I get: ping 10.10.10.10 Pinging 10.10.10.10 with 32 bytes of data: Reply from 10.10.10.20: Destination host unreachable. Reply from 10.10.10.20: Destination host unreachable. Reply from 10.10.10.20: Destination host unreachable. Reply from 10.10.10.20: Destination host unreachable. Trying to join the domain: netdom join SQL-A /domain:contso.com The specified domain either does not exist or could not be contacted. The command failed to complete successfully. Within VirtualBox, I've tried the following combinations for network adapter: Attached to - Promiscuous Mode ------------------------------- NAT Bridged Adapter - Deny Bridged Adapter - Allow VMs Bridged Adapter - Allow All Internal Network - Deny Internal Network - Allow VMs Internal Network - Allow All Host-only Adapter - Deny Host-only Adapter - Allow VMs Host-only Adapter - Allow All Edit ipconfig /all of DC ipconfig /all of SQL-A
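
    "Destination host unreachable" reported from the sender's own address usually means the guests are not actually on a common segment, or Windows Firewall is dropping the traffic. A sketch of pinning all three VMs to the same VirtualBox internal network from the host (VM names as used in this lab; run with the VMs powered off) and of allowing ping inside the guests while testing:

      VBoxManage modifyvm "DC"    --nic1 intnet --intnet1 "labnet"
      VBoxManage modifyvm "SQL-A" --nic1 intnet --intnet1 "labnet"
      VBoxManage modifyvm "SQL-B" --nic1 intnet --intnet1 "labnet"
      rem inside each guest, allow inbound echo requests while troubleshooting
      netsh advfirewall firewall add rule name="ICMPv4-In" protocol=icmpv4:8,any dir=in action=allow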

    Read the article

  • Vim configuration slow in Terminal & iTerm2 but not in MacVim

    - by Jey Balachandran
    Ideally, I want to use Vim from Terminal or iTerm2. However, it becomes unbearably slow so I had to resort to using MacVim. There is nothing wrong with MacVim, however my workflow would be much smoother if I used only Terminal/iTerm2. When its slow Loading files, in particular Rails files takes about 1 - 1.5s. Removing rails.vim decreases this time to 0.5 - 1s. In MacVim this is instantaneous. Scrolling through the rows and columns via h, j, k, l. It progressively gets slower the longer I hold down the keys. Eventually, it starts jumping rows. I have my Key Repeat set to Fast and Delay Until Repeat set to Short. After 10 - 15 minutes of usage, using plugins such as ctrlp or Command-T gets very laggy. I'd type a letter, wait 2 - 3s, then type the next. My Setup 11" MacBook Air running Mac OS X Version 10.7.3 (1.6 Ghz Intel Core 2 Duo, 4 GB DDR3) My dotfiles. > vim --version VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Nov 16 2011 16:44:23) MacOS X (unix) version Included patches: 1-333 Huge version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv -cscope +cursorbind +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap -lua +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra -perl +persistent_undo +postscript +printer +profile +python -python3 +quickfix +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/local/Cellar/vim/7.3.333/share/vim" Compilation: /usr/bin/llvm-gcc -c -I. -Iproto -DHAVE_CONFIG_H -DMACOS_X_UNIX -no-cpp-precomp -O3 -march=core2 -msse4.1 -w -pipe -D_FORTIFY_SOURCE=1 Linking: /usr/bin/llvm-gcc -L. -L/usr/local/lib -o vim -lm -lncurses -liconv -framework Cocoa -framework Python -lruby I've tried running without any plugins or syntax highlighting. It opens files a lot faster but still not as fast as MacVim. But the other two problems still exist. Why is my vim configuration slow? How can I improve the speed of my vim configuration within Terminal or iTerm2?
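
    A quick way to narrow this down (the +startuptime feature is compiled in, per the :version output above) is to time startup and then bisect the configuration, sketched here:

      # where does the 1-1.5 s go?
      vim --startuptime /tmp/vim-startup.log app/models/user.rb
      # no vimrc, no plugins at all; if this is fast, the config or a plugin is the culprit
      vim -u NONE -N app/models/user.rb
      # personal vimrc off but plugins on (and the reverse with --noplugin)
      vim -u NORC app/models/user.rb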

    Read the article

  • How to change key mappings in Cygwin's Vim

    - by Boldewyn
    I'm using Vim under Debian, Win Vista and WinXP (the latter two with Cygwin). To handle tabs more easily, I mapped <C-Left> and <C-Right> to :tab(prev|next). This mapping works like a charm on the Debian machine. On the Windows machines, however, pressing <C-Left> deletes 5 lines, as far as I can tell, and meddles with cursor position, while <C-Right> does this, too, and additionally enters Insert mode. Question: To put it in a nutshell, how can I find out, why Vim behaves as it does? Is there a way to backtrace the active commands and keystrokes? Could there be a plugin the culprit? (I didn't install one, perhaps a default include by the Cygwin distro...) If so, how can I find it? Edit 1: OK, it seems, that I got a first trace: The terminal sends for <C-Left> '^[[1;5D', and for right '^[[1;5C' (evaluated with the <C-V><C-Left> trick). If vim interprets this literally and discards the first characters, it explains the strange behaviour. Any ideas, how I could change this key mapping? Additional Diagnosis: This behaviour occurs regardless of any existing ~/.vimrc file (is therefore not related to my above mentioned mapings) and is not inherited of some /etc/vim/vimrc, since this doesn't exist in the default Cygwin installation. :verbose map doesn't yield any new insights. Either nothing or my mentioned mappings appear, based on the existence of the .vimrc file :help <C-Left> suggests, that the default would be a simple cursor movement, which is apparently not the case. Vim's version under Cygwin: VIM - Vi IMproved 7.2 (2008 Aug 9, compiled Feb 11 2010 17:36:58) Included patches: 1-264 Compiled by http://cygwin.com/ Huge version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +cryptv +cscope +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl +postscript +printer +profile -python +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/share/vim" Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -g -O2 -D_FORTIFY_SOURCE=1 Linking: gcc -L/usr/local/lib -o vim.exe -lm -lncurses -liconv
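
    Given that the terminal emits ^[[1;5D and ^[[1;5C, one hedged fix is to teach Vim those sequences explicitly so the existing <C-Left>/<C-Right> tab mappings fire; a sketch (<Esc> below is Vim's key notation for the leading escape byte):

      " ~/.vimrc on the Cygwin machines
      map  <Esc>[1;5D <C-Left>
      map! <Esc>[1;5D <C-Left>
      map  <Esc>[1;5C <C-Right>
      map! <Esc>[1;5C <C-Right>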

    Read the article

  • How to find out Vim's currently mapped commandos

    - by Boldewyn
    I'm using Vim under Debian, Win Vista and WinXP (the latter two with Cygwin). To handle tabs more easily, I mapped <C-Left> and <C-Right> to :tab(prev|next). This mapping works like a charm on the Debian machine. On the Windows machines, however, pressing <C-Left> deletes 5 lines, as far as I can tell, and meddles with cursor position, while <C-Right> does this, too, and additionally enters Insert mode. Question: To put it in a nutshell, how can I find out, why Vim behaves as it does? Is there a way to backtrace the active commands and keystrokes? Could there be a plugin the culprit? (I didn't install one, perhaps a default include by the Cygwin distro...) If so, how can I find it? Additional Diagnosis: This behaviour occurs regardless of any existing ~/.vimrc file (is therefore not related to my above mentioned mapings) and is not inherited of some /etc/vim/vimrc, since this doesn't exist in the default Cygwin installation. :verbose map doesn't yield any new insights. Either nothing or my mentioned mappings appear, based on the existence of the .vimrc file :help <C-Left> suggests, that the default would be a simple cursor movement, which is apparently not the case. Vim's version under Cygwin: VIM - Vi IMproved 7.2 (2008 Aug 9, compiled Feb 11 2010 17:36:58) Included patches: 1-264 Compiled by http://cygwin.com/ Huge version without GUI. Features included (+) or not (-): +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent -clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +cryptv +cscope +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl +postscript +printer +profile -python +quickfix +reltime +rightleft -ruby +scrollbind +signs +smartindent -sniff +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset -xim -xsmp -xterm_clipboard -xterm_save system vimrc file: "$VIM/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" fall-back for $VIM: "/usr/share/vim" Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -g -O2 -D_FORTIFY_SOURCE=1 Linking: gcc -L/usr/local/lib -o vim.exe -lm -lncurses -liconv

    Read the article

  • Event ID 9331 MSExchangeSA & Event ID 9335 MSExchangeSA

    - by George
    I get these two Exchange 2010 Global Address Book related event IDs: Event ID 9331 MSExchangeSA OABGen encountered error 80004005 (internal ID 50101f1) accessing the public folder database while generating the offline address list for address list '/'. -\Default Offline Address List and Event ID 9335 MSExchangeSA OABGen encountered error 80004005 while cleaning the offline address list public folders under /o=xxxxx xxxx/cn=addrlists/cn=oabs/cn=Default Offline Address List. Please make sure the public folder database is mounted and replicas exist of the offline address list folders. No offline address lists have been generated. Please check the event log for more information. -\Default Offline Address List It is Exchange 2010 SP2 sitting on Windows 2008 Enterprise Edition. Essentially the issue is that the Global Address Book is not being updated on Outlook clients. We are using Outlook 2007 and 2010. So far I have tried running the following command: Update-FileDistributionService -Identity ExchangeServer -Type "OAB" And I tried this solution as well: 1) Make sure the Microsoft Exchange System Attendant is running. It will be set to start automatically by default, but it doesn't start. This is a known issue. Start this service manually. When running, you will not get an error when trying to update the GAL. 2) "Apply" any changes made to any address lists before the GAL will update Outlook properly. In Organization Configuration - Mailbox in EMC, view the properties of the Default Global Address Book in the Offline Address Book tab. In the properties window, select the Address Lists tab. This shows which address lists make up the GAL. 3) Close the properties window and select the Address Lists tab in the Organization Configuration - Mailbox. Right-click each address list used by the Def GAL and click "Apply" (make sure the "Immediately" radio button is checked). 4) Last, go back to the Offline Address Book tab, right-click the GAL and select "Update". After a few send/receives in the Outlook clients, their Global Address List should update to show the latest changes. Neither one of those solutions helped. So I am not really sure what to do here. Also, I am aware of changing the registry on each local computer, but it would be close to impossible as we have 8 offices in 3 different countries. Any suggestions? EDIT 7.XII.2012 @ 10.35 I forgot to mention that we did rebuild the address book and that didn't help.
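
    Since both events point at the public-folder distribution copy of the OAB, one avenue, sketched in the Exchange Management Shell below with the default OAB identity, is to check where the generation server and public folder database live, regenerate the OAB, and, if the Outlook 2007/2010 clients don't actually need public folder distribution, switch to web-based distribution only:

      Get-OfflineAddressBook | fl Name,Server,PublicFolderDatabase,PublicFolderDistributionEnabled,VirtualDirectories
      Update-OfflineAddressBook -Identity "\Default Offline Address List"
      Update-FileDistributionService -Identity ExchangeServer -Type OAB
      # optional, if no legacy Outlook clients depend on public folder distribution:
      # Set-OfflineAddressBook -Identity "\Default Offline Address List" -PublicFolderDistributionEnabled $false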

    Read the article

  • Automatic layout of manual network mapping

    - by Paul
    So I have a small business network mainly consisting of two routed layer-2 domains with a total of ca. 100 devices spread over ca. 2000m² of production and office space. Typical problems to solve using the graph would be: Over what (cable) path is a PC connected to the server? Where to expect devices connected to a switch port? I want to generate a graph of the physical network topology: Nodes are endpoint devices, switch ports, wall outlets, patch panel ports etc. Edges are cable connections. Ideally, edges (or segments) that pass through the same bundle could be grouped. Also I would like to augment the graph data with automatically gathered data (monitoring state, MAC address, Switch port <- MAC entries to build up parts of the map). At the moment I use graphviz for this inside a Confluence wiki like this: layout = "neato" overlap = scale subgraph { rankdir = "TB" subgraph cluster_r1pf1 { r1pf1 [label="{ Rack 1 PF 1 | { <p1>P1 | <p2>P2 | <p3>P3} }", shape=record] } subgraph cluster_switch1 { switch1 [label="{ Rack 1 Switch 1 | { <p1> P1 | <p1> P1 | <p3> P3} }", shape=record] } r1pf1:p1 -> switch1:p1 (obviously there are dozens of entries omitted here) The problem is: I have a hard time influencing graphviz to generate a bearable layout. Edges overlap so badly that you can't read the diagram anymore. The question is: What other tools (be it interactive like Visio or OmniGraffle, or I/O-oriented like graphviz) exist that would allow easily versionable (as in: operates on a text file) documentation that is both machine and human readable and editable? Why not OmniGraffle or Visio? Well, we don't have Macs and Visio is not available at the moment. To buy it I would need good arguments. Automation would be one of them. But last time I looked, versioning Visio files or even thinking about automatic handling was a nightmare. Related: Network Mapping Tools basically asks the same with a focus on generating the complete graph automatically (but without the need to document cabling connections). Recommendations for automatic computer inventory brings up links to "all-in-one" solutions.
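
    For the overlap problem specifically, a hedged starting point is to let dot (rather than neato) produce a ranked layout with orthogonal edge routing and merged parallel runs, which tends to behave better for port-to-port cabling diagrams:

      digraph cabling {
          layout      = "dot"      // ranked layout usually untangles cable runs better than neato
          rankdir     = "LR"
          splines     = ortho      // right-angled edges, fewer crossings through nodes
          concentrate = true       // merge parallel edge segments, approximating cable bundles
          nodesep     = 0.5
          ranksep     = 1.2
          node [shape=record]
          // ... racks, switches and outlets as record nodes, as in the existing snippet ...
      }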

    Read the article

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it? As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek. Here are some solutions I had in mind, none of which I am satisfied with: Create the filesystem on an SSD, because the seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive. Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)? Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print. Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count of more than 1, but that's not a problem since I have only a few dozen such files in my use case. Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that? Use a filesystem which has an online defragmentation tool which can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel's in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
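
    For option 5, the closest knob is the dentry/inode cache pressure; it won't hard-lock anything and doesn't help the first run, but it biases the kernel towards keeping the cached stat() data around. A sketch (setting it to 0 risks memory pressure elsewhere, so treat it as an experiment):

      # keep dentries and inodes cached as long as possible
      sysctl vm.vfs_cache_pressure=0
      # warm the cache once; later ls -laR runs are then served from memory
      ls -laR /media/myfs > /dev/null
      # watch the inode and dentry slabs grow
      slabtop -o | head -20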

    Read the article

  • Problem deploying GWT application on apache and tomcat using mod_jk

    - by Colin
    I'm trying to deploy a GWT app on Apache using the mod_jk connector. I have compiled the application and tested it on Tomcat at localhost:8080/loginapp, and it works fine. However, when I deploy it to Apache via mod_jk I get the starter page, which gives me a login form, but when I try to log in I get this error:

        404 Not Found
        The requested URL /loginapp/loginapp/login was not found on this server

    Looking at the Apache log files I see this:

        [Thu Jan 13 13:43:17 2011] [error] [client 127.0.0.1] client denied by server configuration: /usr/local/tomcat/webapps/loginapp/WEB-INF/
        [Thu Jan 13 13:43:26 2011] [error] [client 127.0.0.1] File does not exist: /usr/local/tomcat/webapps/loginapp/loginapp/login, referer: http://localhost/loginapp/LoginApp.html

    The mod_jk configuration in my apache2.conf file is as follows:

        LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so
        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile /var/log/apache2/mod_jk.log
        JkLogLevel info
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
        JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
        JkRequestLogFormat "%w %V %T"
        <IfModule mod_jk.c>
            Alias /loginapp "/usr/local/tomcat/webapps/loginapp/"
            <Directory "/usr/local/tomcat/webapps/loginapp/">
                Options Indexes +FollowSymLinks
                AllowOverride None
                Allow from all
            </Directory>
            <Location /*/WEB-INF/*>
                AllowOverride None
                deny from all
            </Location>
            JkMount /loginapp/*.html loginapp

    My workers.properties file is as follows:

        workers.tomcat_home=/usr/local/tomcat
        workers.java_home=/usr/lib/jvm/java-6-sun
        ps=/
        worker.list=loginapp
        worker.loginapp.type=ajp13
        worker.loginapp.host=localhost
        worker.loginapp.port=8009
        worker.loginapp.cachesize=10
        worker.loginapp.cache_timeout=600
        worker.loginapp.socket_keepalive=1
        worker.loginapp.recycle_timeout=300
        worker.loginapp.lbfactor=1

    And these are the servlet mappings in the application's web.xml:

        <servlet>
            <servlet-name>loginServlet</servlet-name>
            <servlet-class>com.example.loginapp.server.LoginServiceImpl</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>loginServlet</servlet-name>
            <url-pattern>/loginapp/login</url-pattern>
        </servlet-mapping>
        <servlet>
            <servlet-name>myAppServlet</servlet-name>
            <servlet-class>com.example.loginapp.server.MyAppServiceImpl</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>myAppServlet</servlet-name>
            <url-pattern>/loginapp/mapdata</url-pattern>
        </servlet-mapping>

    I've tried everything and it still eludes me. I even tried changing the "deny from all" directive on the WEB-INF folder to "allow from all", and it still doesn't work. Maybe I'm missing something. Any help will be highly appreciated.
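    (A hypothetical sanity check for a setup like the one above: request the same paths once directly from Tomcat on port 8080 and once through Apache on port 80. If the HTML page works both ways but the servlet path only answers on 8080 (even a 405 for a plain GET counts as an answer), the request is dying in Apache, which would point at the JkMount pattern, which forwards only /loginapp/*.html, rather than at the application itself. Hosts, ports and paths below are taken from the question; adjust as needed.)

        #!/usr/bin/env python3
        """Compare responses for the GWT app served directly by Tomcat and via Apache/mod_jk."""
        from urllib.error import HTTPError, URLError
        from urllib.request import urlopen

        PATHS = ["/loginapp/LoginApp.html", "/loginapp/loginapp/login"]

        for port, label in ((8080, "Tomcat direct"), (80, "via Apache/mod_jk")):
            for path in PATHS:
                url = f"http://localhost:{port}{path}"
                try:
                    with urlopen(url, timeout=5) as resp:
                        result = str(resp.status)
                except HTTPError as err:   # server answered with an error status
                    result = str(err.code)
                except URLError as err:    # connection problem, server not reachable
                    result = str(err.reason)
                print(f"{label:18} {path:28} -> {result}")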

    Read the article

  • How should I convert a physical drive to a VHD for use with VirtualPC?

    - by RBerteig
    I have the hard disks from a PC that was happily running Windows Me until it suffered an unknown hardware failure. The drives are intact and can be mounted and read on other PCs. We have data backups, but there is licensed software installed that may not be possible to migrate to newer versions running on a more modern platform, which makes the idea of just booting a virtual image attractive. Is it possible to make VHDs from the drives such that I can boot them in VirtualPC? If not VirtualPC, would it be possible in any other virtualization tool?

    Edit: Some more details.... The system was running Windows Me, upgraded from Windows 95 (or possibly 98). It can't have been more than a Pentium II, but I will have to look at the motherboard to confirm that. There were no "exotic" devices installed, and nothing beyond the usual legacy stuff that would need to survive into a virtual machine. The licensed software did not have a dongle, so I won't need to worry about virtualizing a physical dongle of some kind. Licenses were probably tied to the disk serial number. There were two HDs, both IDE. The boot disk is about 6GB, and the spare data disk is 12GB, but nearly empty. I have a small bias in favor of VirtualPC just because it's free and I've used it successfully in the past, but this is a good excuse to revisit the state of the art. I do know from direct experience that it is possible to install and boot DOS 5.0 and Win95 in VirtualPC, but the VM extensions weren't available, so the experience isn't as seamless as I would have liked. A very old DirectX game that failed miserably under XP SP2 runs really nicely on that VM, and actually plays better in a lot of ways than it did on period hardware, so that gives me hope that this is possible.

    Edit 2: Well, I'm closer than I was when I asked... so thanks to all for the helpful suggestions and hints about what I should be trying. I used WinImage to copy the disks, and VirtualPC 2007 to attempt to boot. So far, I have it booting in safe mode, but hanging with a black screen otherwise. I strongly suspect that the copy of Artisoft LANtastic 8.0 (anyone else remember them?) that is still installed for networking with even older PCs that mostly don't exist any more is the culprit there. In my infinite free time, I will try to resolve the differences between a Safe Mode boot and a normal boot, and I feel that it is likely to yield to pressure. I'd accept more than one answer if I could... this isn't as black and white a question as the one-accepted-answer convention assumes.

    Read the article

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:

        /dev/sda1 - Windows 8 (ntfs)
        /dev/sda2 - Ubuntu / (ext4)
        /dev/sda3 - Ubuntu home (ext4)
        /dev/sda5 - swap
        /dev/sda6 - Shared data partition (exfat)

    (First off, yes, I do have the exfat libraries installed on Ubuntu.) I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them (replacing the ones on the shared partition). When I boot into Windows, the files appear unchanged - exactly like they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance.

    Edit -- command output:

    lsblk

        NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda      8:0    0 465.8G  0 disk
        +-sda1   8:1    0 165.1G  0 part
        +-sda2   8:2    0  21.3G  0 part /
        +-sda3   8:3    0  98.9G  0 part /home
        +-sda4   8:4    0     1K  0 part
        +-sda5   8:5    0   7.8G  0 part [SWAP]
        +-sda6   8:6    0 172.7G  0 part /mnt/shared_data

    /etc/fstab

        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # /dev/sda2
        UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
        # /dev/sda3
        UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
        # /dev/sda5
        UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
        # /dev/sda6
        UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3

    /etc/mtab

        /dev/sda2 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /home ext4 rw 0 0
        /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0

    Read the article

< Previous Page | 245 246 247 248 249 250 251 252 253 254 255 256  | Next Page >