Search Results

Search found 24865 results on 995 pages for 'default route'.


  • Apache 2.2.21 installed on Linux 6, but the default page cannot be reached in a browser

    - by JRanjan
    I am very new to Linux. I have installed Apache 2.2.21 on a Linux 6 platform. When I start it with ./apachectl start or ./apachectl -k start, the command reports that Apache has started, but when I try to open the default Apache page in a browser using " http://:8080 " I get "page cannot be displayed". Can anyone help me with this issue? I am also enclosing the error_log file below; it shows nothing but normal startup and shutdown notices:

      [Thu Nov 24 08:57:23 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
      [Fri Nov 25 01:45:58 2011] [notice] caught SIGTERM, shutting down
      [Fri Nov 25 01:46:12 2011] [notice] Digest: generating secret for digest authentication ...
      [Fri Nov 25 01:46:12 2011] [notice] Digest: done
      [Fri Nov 25 01:46:13 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
      [Fri Nov 25 01:54:58 2011] [notice] caught SIGTERM, shutting down
      [Fri Nov 25 01:55:10 2011] [notice] Digest: generating secret for digest authentication ...
      [Fri Nov 25 01:55:10 2011] [notice] Digest: done
      [Fri Nov 25 01:55:11 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
      [Fri Nov 25 01:58:10 2011] [notice] caught SIGTERM, shutting down
      [Fri Nov 25 01:59:41 2011] [notice] Digest: generating secret for digest authentication ...
      [Fri Nov 25 01:59:41 2011] [notice] Digest: done
      [Fri Nov 25 01:59:42 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
      [Fri Nov 25 03:23:14 2011] [notice] caught SIGTERM, shutting down
      [Fri Nov 25 03:27:36 2011] [notice] Digest: generating secret for digest authentication ...
      [Fri Nov 25 03:27:36 2011] [notice] Digest: done
      [Fri Nov 25 03:27:37 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
      [Fri Nov 25 08:52:27 2011] [notice] caught SIGTERM, shutting down
      [Fri Nov 25 08:52:43 2011] [notice] Digest: generating secret for digest authentication ...
      [Fri Nov 25 08:52:43 2011] [notice] Digest: done
      [Fri Nov 25 08:52:44 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
      [Fri Nov 25 09:21:39 2011] [notice] caught SIGTERM, shutting down
      [Fri Nov 25 09:21:57 2011] [notice] Digest: generating secret for digest authentication ...
      [Fri Nov 25 09:21:57 2011] [notice] Digest: done
      [Fri Nov 25 09:21:58 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
      [Mon Nov 28 01:06:58 2011] [notice] caught SIGTERM, shutting down
      [Mon Nov 28 01:07:58 2011] [notice] Digest: generating secret for digest authentication ...
      [Mon Nov 28 01:07:58 2011] [notice] Digest: done
      [Mon Nov 28 01:07:59 2011] [notice] Apache/2.2.21 (Unix) DAV/2 configured -- resuming normal operations
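
    Since the error log shows clean startups, the usual suspects are that Apache is not actually listening on the port being requested, or that a host firewall is blocking it. A minimal way to check, sketched below on the assumption that Apache lives under /usr/local/apache2 and that iptables is in use (both assumptions; adjust paths to your install):

      # confirm the port Apache is configured to listen on
      grep -i '^Listen' /usr/local/apache2/conf/httpd.conf
      # confirm something is actually listening on 8080
      netstat -tlnp | grep 8080
      # check whether the host firewall allows the port; if not, open it
      iptables -L -n | grep 8080
      iptables -I INPUT -p tcp --dport 8080 -j ACCEPT

    If Listen says 80 rather than 8080, either browse to port 80 or change the Listen directive and restart Apache.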

    Read the article

  • Is there an IE8 setting or policy to make it work like IE7 with respect to persistent connections?

    - by Stephen Pace
    I am working with a commercial application running on XP using IIS 5.1. Periodically the application returns the IIS error "There are too many people accessing the Web site at this time." This is caused by Microsoft artificially limiting the number of connections (10) under IIS 5.1 on Windows XP, but in this case there is really only one user (albeit with a few tabs open at a time). Microsoft suggests you can reduce the problem by turning off HTTP keep-alives for that particular web site (http://support.microsoft.com/kb/262635): "If you use IIS 5.0 on Windows 2000 Professional or IIS 5.1 on Microsoft Windows XP Professional, disable HTTP keep-alives in the properties of the Web site. When you do this, a limit of 10 concurrent connections still exists, but IIS does not maintain connections for inactive users." I may do that, but I'm worried about performance degradation. I also notice that IE8 appears to handle this differently than IE7: by default, IE6 and IE7 use 2 persistent connections per server while IE8 uses 6. Perhaps IE8 itself is generating multiple connections in an attempt to be faster, and those additional connections are overwhelming the artificially limited IIS 5.1 on XP? Assuming that is the case, is there an Internet Explorer option, registry setting, or policy I can set to force IE8 to behave like IE7 with respect to persistent connections? I would not set this for all users, but for the small number of users who use this application it might solve their intermittent problem until the application can be rehosted on Windows Server 2008. Thanks.
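
    Internet Explorer does expose per-user registry values that cap the number of connections it opens per server. The .reg fragment below is one hedged way to try mimicking IE7's two-connection behaviour; the value of 2 is an assumption chosen to match IE7, and whether it fully avoids the IIS 5.1 limit is worth testing on a single machine first:

      Windows Registry Editor Version 5.00

      [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
      "MaxConnectionsPerServer"=dword:00000002
      "MaxConnectionsPer1_0Server"=dword:00000002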

    Read the article

  • Why does building OpenVPN fail for an ordinary (non-root) user, and how can I deal with the missing libpam?

    - by hugemeow
    The configure run ends like this:

      checking tap-windows.h presence... no
      checking for tap-windows.h... no
      checking whether TUNSETPERSIST is declared... yes
      checking for setcon in -lselinux... yes
      checking for pam_start in -lpam... no
      checking for OPENSSL_CRYPTO... yes
      checking for OPENSSL_SSL... yes
      checking for EVP_CIPHER_CTX_set_key_length... yes
      checking for ENGINE_load_builtin_engines... yes
      checking for ENGINE_register_all_complete... yes
      checking for ENGINE_cleanup... yes
      checking for ssl_init in -lpolarssl... no
      checking for aes_crypt_cbc in -lpolarssl... no
      checking for lzo1x_1_15_compress in -llzo2... no
      checking for lzo1x_1_15_compress in -llzo... no
      checking for PKCS11_HELPER... no
      checking git checkout... yes
      configure: error: libpam required but missing

      [mirror@innov openvpn]$ ./configure --help | grep libpam
        --enable-pam-dlopen     dlopen libpam [default=no]
                                C compiler flags for libpam
        LIBPAM_LIBS             linker flags for libpam

      [mirror@xxx openvpn]$ ./configure --prefix=/home/mirror/build/ins/ins_vpn --disable-lzo
      configure: error: libpam required but missing

    I have no privilege to install the libpam package on this machine. Can I build libpam myself, install it in my home directory, and then build OpenVPN against it?
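
    Building Linux-PAM into a home directory and pointing OpenVPN's configure at it is generally possible; the sketch below uses the LIBPAM_LIBS hook that configure --help already advertises, but the Linux-PAM version, tarball name and install paths are illustrative assumptions rather than values from the original post:

      # build and install Linux-PAM under $HOME (version/path are examples)
      tar xzf Linux-PAM-1.1.8.tar.gz && cd Linux-PAM-1.1.8
      ./configure --prefix=$HOME/build/ins/libpam && make && make install
      cd ../openvpn

      # point OpenVPN's configure at the private libpam
      CPPFLAGS="-I$HOME/build/ins/libpam/include" \
      LDFLAGS="-L$HOME/build/ins/libpam/lib" \
      LIBPAM_LIBS="-L$HOME/build/ins/libpam/lib -lpam" \
      ./configure --prefix=/home/mirror/build/ins/ins_vpn --disable-lzo

    At runtime the private libpam also needs to be on the loader path, e.g. by exporting LD_LIBRARY_PATH=$HOME/build/ins/libpam/lib before starting OpenVPN.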

    Read the article

  • Can I use a single SSLCertificateFile for all my VirtualHosts instead of creating one for each VirtualHost?

    - by user65567
    I have many Apache VirtualHosts, and for each of them I use a dedicated SSLCertificateFile. This is a configuration example of one VirtualHost:

      <VirtualHost *:443>
        ServerName subdomain.domain.localhost
        DocumentRoot "/Users/<my_user_name>/Sites/users/public"
        RackEnv development
        <Directory "/Users/<my_user_name>/Sites/users/public">
          Order allow,deny
          Allow from all
        </Directory>
        # SSL Configuration
        SSLEngine on
        # Self-signed certificates
        SSLCertificateFile /private/etc/apache2/ssl/server.crt
        SSLCertificateKeyFile /private/etc/apache2/ssl/server.key
        SSLCertificateChainFile /private/etc/apache2/ssl/ca.crt
      </VirtualHost>

    Since I maintain several Ruby on Rails applications using the Passenger Preference Pane, this is part of the apache2 httpd.conf file:

      <IfModule passenger_module>
        NameVirtualHost *:80
        <VirtualHost *:80>
          ServerName _default_
        </VirtualHost>
        Include /private/etc/apache2/passenger_pane_vhosts/*.conf
      </IfModule>

    Can I use a single SSLCertificateFile for all my VirtualHosts (I have heard of wildcard certificates) instead of creating one for each VirtualHost? If so, how do I change the files listed above?
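
    If all the vhosts are subdomains of one domain, a single certificate whose common name (or subjectAltName) is a wildcard such as *.domain.localhost can be reused by every SSL VirtualHost; the sketch below shows the idea, with certificate file names that are assumptions rather than taken from the post. Note that a wildcard only matches one subdomain level, so *.domain.localhost covers users.domain.localhost but not a.b.domain.localhost:

      # one self-signed wildcard certificate reused by every SSL vhost
      <VirtualHost *:443>
        ServerName users.domain.localhost
        DocumentRoot "/Users/<my_user_name>/Sites/users/public"
        SSLEngine on
        SSLCertificateFile    /private/etc/apache2/ssl/wildcard.domain.localhost.crt
        SSLCertificateKeyFile /private/etc/apache2/ssl/wildcard.domain.localhost.key
      </VirtualHost>

      <VirtualHost *:443>
        ServerName other.domain.localhost
        DocumentRoot "/Users/<my_user_name>/Sites/other/public"
        SSLEngine on
        SSLCertificateFile    /private/etc/apache2/ssl/wildcard.domain.localhost.crt
        SSLCertificateKeyFile /private/etc/apache2/ssl/wildcard.domain.localhost.key
      </VirtualHost>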

    Read the article

  • Cannot browse remote networks even with WINS configured

    - by paradroid
    Because NetBIOS name broadcasts do not cross routers, WINS has been installed and configured on two domain controllers, on different networks, in order to enable network browsing of remote networks. The WINS servers appear to be replicating with each other, and each has 127.0.0.1 set as the Primary WINS Server in its LAN interface properties, with nothing entered for Secondary WINS Server. The DC which holds the PDC Emulator FSMO role has the Computer Browser service running and set to start automatically, and it has the WINS/NBT node type network setting at 0x8 (H-node - hybrid node). Remote network browsing does not work. Is the WINS/NBT node type correct for this scenario? The reason I think it may not be is that I set the DHCP server's 046 WINS/NBT node type option to 0x8 as well, after which the DHCP clients started to disappear from the Network folders. When that option is not set, does it default to B-node (broadcast node)? Or could it be a problem with the WINS server setup?

    Read the article

  • Is StoreJet Transcend (0x2329) an Advanced Format drive?

    - by Graham Perrin
    I use a 640 GB StoreJet Transcend (0x2329) with ZEVO Community Edition 1.1.1 on OS X 10.8.2.

    Question: Is this drive Advanced Format?

    Background: I submitted a request for technical support to Transcend, but the first response was gibberish, so I don't expect a reasonable follow-up. The models at http://www.transcend-info.com/Products/CatList.asp?LangNo=0&ModNo=293 are similar but different sizes (not 640 GB); mine is probably the 25M2 (TS640GSJ25M2). Unless I'm missing something, nothing currently in the Transcend support area tells me whether the drive is Advanced Format. From System Information in OS X 10.8.2:

      StoreJet Transcend:
        Capacity: 640.14 GB (640,135,028,736 bytes)
        Removable Media: Yes
        Detachable Drive: Yes
        BSD Name: disk3
        Product ID: 0x2329
        Vendor ID: 0x152d (JMicron Technology Corp.)
        Version: 0.00
        Serial Number: 322549FBA004
        Speed: Up to 480 Mb/sec
        Manufacturer: JMicron

    History for the ZFS pool shows creation in March 2012:

      macbookpro08-centrim:~ gjp22$ zpool history zhandy | grep create
      2012-03-14.17:29:37 zpool create -f -O compression=off -O copies=1 -O casesensitivity=insensitive -O snapdir=visible zhandy /dev/dsk/GPTE_1928482A-7FE4-482D-B692-3EC6B03159BA
      2012-06-22.15:51:16 zfs create zhandy/Pocket Time Machine

    At that time I almost certainly used ZEVO Setup Assistant to create the pool.

      macbookpro08-centrim:~ gjp22$ zpool get ashift zhandy
      NAME    PROPERTY  VALUE  SOURCE
      zhandy  ashift    0      default

    If I discover that the drive is Advanced Format, a different ashift value will be appropriate.
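
    One quick check, with the caveat that many USB-SATA bridges (including JMicron ones) report 512-byte sectors regardless of what the disk inside actually uses, is to ask the OS for the device's block size; the command below is plain OS X diskutil, though whether it reveals the true physical sector size through this enclosure is an open question:

      macbookpro08-centrim:~ gjp22$ diskutil info disk3 | grep -i 'block size'

    If the drive does turn out to be Advanced Format, the usual remedy is to recreate the pool so that it uses 4096-byte alignment (ashift=12); whether ZEVO CE exposes a way to force that at creation time is something to confirm in its documentation.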

    Read the article

  • How should I configure my Active Directory servers so that if one goes down, users are not kicked off SQL?

    - by Matty Brown
    Today we shut down one of our Active Directory servers during office hours to check the loading on a UPS. Since all that server did was provide Active Directory in a separate building, in case the main building caught fire or whatever, we didn't think it would have any effect on our users. Seconds after the server was shut down, we had a dozen phone calls from users experiencing this issue:

      [Microsoft SQL Server Login] SQLState: '28000'
      [Microsoft][ODBC SQL Server Driver][SQL Server]Login failed. The login is from an untrusted domain and cannot be used with authentication.

    Once we realized what had happened, we quickly rebooted the downed Active Directory server and the problem was solved. But why did this happen? And what if one day a server has a breakdown and is offline for hours, or days? Shouldn't the other Active Directory servers in the domain service authentication requests without disruption to users? We have 3 Windows Server 2003 Standard servers running Active Directory as domain controllers with Global Catalogs, all physically located on the same network at Gigabit speeds. I believe the domain was originally Windows Server 2000, or maybe even NT 4.0. Could the issue be down to old Group Policies inherited from these old server OSes, or some default setting in Active Directory that needs changing?

    Read the article

  • Dual Monitor setup issues between laptop and external led monitor

    - by Julian
    I have two problems:

      1. The monitor will not connect to the laptop through HDMI.
      2. Watching HD video content sometimes causes the laptop to turn off, especially when I'm streaming from tekzilla.com.

    Setup: I got my new HP 2311x LED LCD monitor this week and I have it running as the main monitor, extended by the 15-inch screen on my Dell Studio 1558. Right now I have to connect the external monitor through VGA. For the HDMI connection issue, I suspect the appropriate drivers are not installed, because I don't see any HDMI device in Device Manager; I've checked and I don't see any HDMI-specific drivers listed online. For the shutdown issue, I suspect the laptop might be overheating, though I'm not sure why it would - it never did that while I watched movies on the laptop's own screen.

    My laptop configuration:

      15" LED LCD screen at 1366 x 768
      Intel i5 processor
      integrated graphics
      4 GB DDR3 RAM
      500 GB hard drive

    I've tried everything from switching the monitor's source to HDMI to various start-up combinations, and nothing has worked. What could be the issue and how do I solve it?

    Read the article

  • Remote management interface for managing ip6tables (or an alternative firewall)

    - by Matthew Iselin
    I'm working with IPv6 and have run into an issue configuring ip6tables on our main router in order to control what can come into the network. A default DROP rule in the FORWARD section has worked well (obviously leaving ESTABLISHED,RELATED as ACCEPT) to keep internal clients' open ports from being accessed. However, running an ip6tables command for every little change is unwieldy. Whilst we are able to continue creating rules manually, I'm wondering if there's some sort of management interface we could use to create the rules quickly and easily. We're looking to be able to save time working on our firewall as well as providing a simple method for modifying rules for those who will eventually replace us. I know webmin (heavily locked down on our network, naturally) has support for modifying iptables rules, but seemingly no support for ip6tables. Something similar would be fantastic. Alternatively, suggestions for a firewall solution apart from iptables/ip6tables which can be managed remotely wouldn't be out of order. A web interface for management is certainly preferable, even if it is just a wrapper with shiny buttons over the raw config files.
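
    For context, a rule set like the one described - default DROP on FORWARD with established/related traffic allowed - can be kept in a plain ip6tables-restore file, which is also the format most management front-ends (webmin, Shorewall6, ferm and the like) ultimately maintain. The fragment below is only an illustrative reconstruction of such a baseline, not the poster's actual configuration; the address is from the IPv6 documentation prefix:

      # /etc/ip6tables.rules - restore with: ip6tables-restore < /etc/ip6tables.rules
      *filter
      :INPUT ACCEPT [0:0]
      :FORWARD DROP [0:0]
      :OUTPUT ACCEPT [0:0]
      # let replies to outbound connections back in
      -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
      # example of a per-host exception added as needed
      -A FORWARD -d 2001:db8::10/128 -p tcp --dport 443 -j ACCEPT
      COMMIT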

    Read the article

  • Outlook sends uninvited images as attachments

    - by serhio
    Outlook always sends along two pictures (image001.png and image002.gif) with my mails, even if I don't attach any images. How do I fix this? Thanks.

    EDIT - SIGNATURE. The HTML of my signature:

      <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
      <HTML xmlns:o = "urn:schemas-microsoft-com:office:office"><HEAD><TITLE>Company default Signature</TITLE>
      <META content="text/html; charset=windows-1252" http-equiv=Content-Type>
      <META name=GENERATOR content="MSHTML 8.00.6001.18854"></HEAD>
      <BODY>
      <DIV align=left><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt">
      <P class=MsoNormal align=left><BR>Cordialement,&nbsp;</SPAN></FONT><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt"><o:p>&nbsp;</o:p></SPAN></FONT></P>
      <P class=MsoNormal><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt">Firstname LASTNAME<BR></SPAN></FONT><FONT color=navy size=2 face=Arial><SPAN style="FONT-FAMILY: Arial; COLOR: navy; FONT-SIZE: 10pt">COMPANY Name<o:p></o:p></SPAN></FONT></P></DIV></BODY></HTML>

    EDIT 2 - STATIONERY. I don't use stationery format.

    EDIT 3 - HTML. If I delete the signature (Ctrl+A, Del) in HTML mode, the images still appear. If I use the signature in text-only mode, the images disappear.

    Read the article

  • Cisco ASA dropping IPsec VPN traffic between itself and a CentOS server

    - by sebelk
    Currently we're trying to set up an IPsec VPN between a Cisco ASA (version 8.0(4)) and a CentOS Linux server. The tunnel comes up successfully, but for some reason we can't figure out, the firewall is dropping packets from the VPN. The IPsec settings on the ASA are as follows:

      crypto ipsec transform-set up-transform-set esp-3des esp-md5-hmac
      crypto ipsec transform-set up-transform-set2 esp-3des esp-sha-hmac
      crypto ipsec transform-set up-transform-set3 esp-aes esp-md5-hmac
      crypto ipsec transform-set up-transform-set4 esp-aes esp-sha-hmac
      crypto ipsec security-association lifetime seconds 28800
      crypto ipsec security-association lifetime kilobytes 4608000
      crypto map linuxserver 10 match address filtro-encrypt-linuxserver
      crypto map linuxserver 10 set peer linuxserver
      crypto map linuxserver 10 set transform-set up-transform-set2 up-transform-set3 up-transform-set4
      crypto map linuxserver 10 set security-association lifetime seconds 28800
      crypto map linuxserver 10 set security-association lifetime kilobytes 4608000
      crypto map linuxserver interface outside
      crypto isakmp enable outside
      crypto isakmp policy 1
        authentication pre-share
        encryption aes
        hash sha
        group 2
        lifetime 28800
      crypto isakmp policy 2
        authentication pre-share
        encryption aes-256
        hash sha
        group 2
        lifetime 86400
      crypto isakmp policy 3
        authentication pre-share
        encryption aes-256
        hash md5
        group 2
        lifetime 86400
      crypto isakmp policy 4
        authentication pre-share
        encryption aes-192
        hash sha
        group 2
        lifetime 86400
      crypto isakmp policy 5
        authentication pre-share
        encryption aes-192
        hash md5
        group 2
      group-policy linuxserverip internal
      group-policy linuxserverip attributes
        vpn-filter value filtro-linuxserverip
      tunnel-group linuxserverip type ipsec-l2l
      tunnel-group linuxserverip general-attributes
        default-group-policy linuxserverip
      tunnel-group linuxserverip ipsec-attributes
        pre-shared-key *

    Does anyone know where the problem is and how to fix it?
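
    Two things worth checking on an ASA in this situation, offered as a hedged suggestion rather than a diagnosis: whether decrypted VPN traffic is being subjected to the interface ACLs (controlled by sysopt connection permit-vpn), and whether the vpn-filter ACL referenced in the group policy actually permits the traffic in both directions (vpn-filter ACLs are evaluated with the remote network as the source). Something along these lines, with the networks as placeholders:

      ! let decrypted IPsec traffic bypass the outside interface ACL
      sysopt connection permit-vpn

      ! example vpn-filter: remote (CentOS side) network first, local network second
      access-list filtro-linuxserverip extended permit ip 192.0.2.0 255.255.255.0 10.0.0.0 255.255.255.0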

    Read the article

  • smbclient timing out

    - by Sam Lee
    I am trying to set up a Samba share on a CentOS machine and connect to this server using smbclient on OS X. Here is what happens:

      > smbclient -L X.X.X.X
      timeout connecting to X.X.X.X:445
      timeout connecting to X.X.X.X:139
      Error connecting to X.X.X.X (Operation already in progress)
      Connection to X.X.X.X failed

    What could be going wrong? Here is my iptables dump on the CentOS machine (the server):

      > iptables -L -n
      Chain INPUT (policy ACCEPT)
      target     prot opt source               destination
      ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
      REJECT     all  --  0.0.0.0/0            127.0.0.0/8          reject-with icmp-port-unreachable
      ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:445
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:3000
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:80
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:443
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            state NEW tcp dpt:22
      ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmp type 8
      REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
      ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            tcp dpt:3000

    And finally, my smb.conf:

      [global]
      workgroup = workgroup
      security = SHARE
      load printers = No
      default service = global
      path = /home
      available = No
      encrypt passwords = yes

      [share]
      writeable = yes
      admin users = myusername
      path = /home/myhome/
      force user = root
      valid users = myusername
      public = yes
      available = yes
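
    If the firewall turns out to be the culprit, note that classic NetBIOS session traffic uses TCP 139 (with name/datagram services on UDP 137-138), which the dump above does not allow; a hedged sketch of the rules one might insert alongside the existing 445 rule, assuming iptables is indeed what is blocking the connection:

      # allow NetBIOS session service alongside the existing 445 rule
      iptables -I INPUT -p tcp --dport 139 -j ACCEPT
      iptables -I INPUT -p udp --dport 137:138 -j ACCEPT
      service iptables save    # persist across reboots on CentOS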

    Read the article

  • Dovecot unable to perform mysql query

    - by NathanJ2012
    I have been following the ISPMail tutorial on workaround.org (the 2.9 Wheezy version) and thus far everything has been working fine. When I reached the "Testing email delivery" step, I noticed an error about a query in the output from /var/log/mail.log:

      May 14 06:48:59 mail postfix/pickup[17704]: EA4AD240A98: uid=0 from=<root>
      May 14 06:48:59 mail postfix/cleanup[17776]: EA4AD240A98: message-id=<[email protected]>
      May 14 06:48:59 mail postfix/qmgr[17706]: EA4AD240A98: from=<[email protected]>, size=429, nrcpt=1 (queue active)
      May 14 06:49:00 mail dovecot: auth-worker(17782): mysql(127.0.0.1): Connected to database mailserver
      May 14 06:49:00 mail dovecot: auth-worker(17782): Warning: mysql: Query failed, retrying: Table 'mailserver.users' doesn't exist
      May 14 06:49:00 mail dovecot: auth-worker(17782): Error: sql([email protected]): User query failed: Table 'mailserver.users' doesn't exist (using built-in default user_query: SELECT home, uid, gid FROM users WHERE username = '%n' AND domain = '%d')
      May 14 06:49:00 mail dovecot: lda([email protected]): msgid=<[email protected]>: saved mail to INBOX
      May 14 06:49:00 mail postfix/pipe[17780]: EA4AD240A98: to=<[email protected]>, relay=dovecot, delay=0.09, delays=0.03/0.01/0/0.06, dsn=2.0.0, status=sent (delivered via dovecot service)
      May 14 06:49:00 mail postfix/qmgr[17706]: EA4AD240A98: removed

    I found it rather interesting that it isn't finding the table, so I went back through and checked EVERY file I touched that involves the DB (including the Postfix cf files), and everything is correct, so I am baffled at this point. Oddly enough, the email still made it to the correct destination in /var/vmail/domain.com/. Should I be worried about this, or am I missing something here? Since it is a message from Dovecot, it would be the query from dovecot-sql.conf.ext, which I am including here:

      driver = mysql
      connect = host=127.0.0.1 dbname=mailserver user=blocked password=***REMOVED***
      default_pass_scheme = PLAIN-MD5
      password_query = SELECT email as user, password FROM virtual_users WHERE email='%u';
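
    The warning comes from Dovecot falling back to its built-in user_query (which expects a users table) because dovecot-sql.conf.ext only defines password_query; in the ISPMail schema the lookup would normally go against virtual_users instead. A hedged sketch of what the missing directive could look like - the static home/uid/gid values are assumptions to adapt to your setup, not something from the post:

      # dovecot-sql.conf.ext - supply an explicit user query so the
      # built-in default (SELECT ... FROM users) is never used
      user_query = SELECT '/var/vmail/%d/%n' AS home, 'vmail' AS uid, 'vmail' AS gid FROM virtual_users WHERE email = '%u';

    Alternatively, setting mail_home, mail_uid and mail_gid statically in the Dovecot config achieves the same thing without a user query at all.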

    Read the article

  • Can't log in using second domain controller when first DC is unreachable

    - by rbeier
    We're a small web development company. Our domain has two DCs: a main one (BEEHIVE, 192.168.3.20) in the datacenter and a second one (SPHERE2, 10.0.66.19) in the office. The office is connected to the datacenter via a VPN. We recently had a brief network outage in the office, and during this outage we weren't able to access the domain from our office machines. I had hoped that they would fail over to the DC in the office, but that didn't happen, so I'm trying to figure out why. I'm not an expert on Active Directory, so maybe I'm missing something obvious. Both domain controllers are running a DNS server. Each office workstation is configured to use the datacenter DC as its primary DNS server and the office DC as its secondary:

      DNS Servers . . . . . . . . . . . : 192.168.3.20
                                          10.0.66.19

    Both DNS servers are working, and both domain controllers are working (at least, I can connect to both using AD Users and Computers). Here are the SRV records that point to the domain controllers (I've changed the domain name but left the rest alone):

      C:\> nslookup
      Default Server:  beehive.ourcorp.com
      Address:  192.168.3.20

      > set type=srv
      > _ldap._tcp.ourcorp.com
      Server:  beehive.ourcorp.com
      Address:  192.168.3.20

      _ldap._tcp.ourcorp.com  SRV service location:
                priority       = 0
                weight         = 100
                port           = 389
                svr hostname   = beehive.ourcorp.com
      _ldap._tcp.ourcorp.com  SRV service location:
                priority       = 0
                weight         = 100
                port           = 389
                svr hostname   = sphere2.ourcorp.com
      beehive.ourcorp.com     internet address = 192.168.3.20
      sphere2.ourcorp.com     internet address = 10.0.66.19

    Does anyone have any ideas? Thanks, Richard

    Read the article

  • Moving Zend Framework 2 from apache to nginx

    - by Aleksander
    I would like to move a site that uses Zend Framework 2 from Apache to Nginx. The problem is that the site has 6 modules, and Apache handles them through aliases defined in httpd-vhosts.conf:

      # httpd-vhosts.conf
      <VirtualHost _default_:443>
        ServerName localhost:443
        Alias /develop/cpanel     "C:/webapps/develop/mil_catele_cp/public"
        Alias /develop/docs/tech  "C:/webapps/develop/mil_catele_tech_docs/public"
        Alias /develop/docs       "C:/webapps/develop/mil_catele_docs/public"
        Alias /develop/auth       "C:/webapps/develop/mil_catele_auth/public"
        Alias /develop            "C:/webapps/develop/mil_web_dicom_viewer/public"
        DocumentRoot "C:/webapps/mil_catele_homepage"
      </VirtualHost>

    In httpd.conf, DocumentRoot is set to C:/webapps. The sites are available at, for example, localhost/develop/cpanel, and the framework handles further routing. In Nginx I was able to make only one site available, by specifying root C:/webapps/develop/mil_catele_tech_docs/public; in the server block. It works only because the docs module doesn't depend on auth like the others, and the site was at localhost/. In my next attempt:

      root C:/webapps;
      location /develop/auth {
        root C:/webapps/develop/mil_catele_auth/public;
        try_files $uri $uri/ /develop/mil_catele_auth/public/index.php$is_args$args;
      }

    Now when I go to localhost/develop/cpanel it reaches the correct index.php but can't find any resources (CSS and JS files). I have no idea why the reference paths in the browser's GET requests changed to https://localhost/css/bootstrap.css from https://localhost/develop/auth/css/bootstrap.css as they were on Apache. The root directive seems not to be working. Nginx handles PHP via FastCGI:

      location ~ \.(php|phtml)?$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param APPLICATION_ENV production;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
      }

    I googled the whole day and found nothing useful. Can someone help me make this configuration work like it did on Apache?
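
    nginx has no direct equivalent of Apache's Alias directive inside root, but the usual way to mimic an Alias is a location block using alias (not root) plus a try_files rewrite into that module's front controller; keeping the /develop/auth prefix in the request URI also matters for ZF2's base-path detection, which may explain the wrong CSS paths. The fragment below is a hedged sketch for one module only - the paths mirror the Apache config above, everything else is an assumption to adapt and repeat per module:

      location /develop/auth/ {
          alias C:/webapps/develop/mil_catele_auth/public/;
          index index.php;
          # serve static files if they exist, otherwise hand the request to ZF2's front controller
          try_files $uri $uri/ /develop/auth/index.php$is_args$args;
      }

      # route this module's front controller to PHP-FPM explicitly
      location = /develop/auth/index.php {
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_param APPLICATION_ENV production;
          fastcgi_param SCRIPT_FILENAME C:/webapps/develop/mil_catele_auth/public/index.php;
          include fastcgi_params;
      }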

    Read the article

  • Add and remove letterhead in Word document

    - by Daniel Wolf
    Our company has letterheaded paper (pre-printed paper with our logo on it). Whenever we send something out by mail, we print it on that paper. However, when we send the same document via email, we convert it to a PDF file. Now the problem is: when converting a Word document to PDF, it should contain the letterhead; when printing the same document on paper, it should not (or else the letterhead would be printed twice). Currently we are using two different Word document templates - one with letterhead, one without. So whenever we want to add or remove the letterhead, we have to create a new document with the other template and copy and paste everything over. Nasty solution. What I'm looking for is some simple way to switch the letterhead on and off. What I've tried so far:

      Switching the template: there does not seem to be a simple way to switch the template for an existing document.

      Using a picture watermark: our letterhead goes all the way to the border of the page. (No printer supports this, of course, but it is fine for export to PDF.) Apparently depending on the current default printer, Word will not allow a borderless watermark, instead shifting the image around.

      Using the page header: when editing the page header, I can insert pictures at arbitrary positions, which is great. However, I could not find a way (short of macros) to enable/disable just the pictures in the header. (The text should remain there.)

    Read the article

  • Configuring https access on HP A5120 Switch

    - by GerryEgan
    I am trying to configure HTTPS management on an HP A5120 switch running version 5.20.99, Release 2215, and not having much luck. I have followed the manual by creating an SSL policy first and then enabling the HTTPS server with that SSL policy:

      ssl server-policy sslpol
      ip https ssl-server-policy sslpol
      ip https enable

    When I try to log onto the switch with Google Chrome I get the following error:

      Error 107 (net::ERR_SSL_PROTOCOL_ERROR): SSL protocol error.

    When I looked this up I found references to errors due to TLS being used in SSL, but I can find no way to specify the SSL version in the server policy. The manual has a configuration example that uses MSCEP to retrieve a certificate, but in Windows 2008 R2 that feature is only available in the Enterprise and Datacenter editions, which I don't have. I have SSH configured and it is using a locally generated certificate, so I'm not sure if I can use that, but I'd like to if possible. Has anybody been able to set up HTTPS management on HP A-series switches without MSCEP? Any and all help appreciated! Here is a copy of my config with the interfaces removed:

      version 5.20.99, Release 2215
      #
      sysname MYSYSNAME
      #
      irf domain 10
      irf mac-address persistent timer
      irf auto-update enable
      undo irf link-delay
      #
      domain default enable system
      #
      telnet server enable
      #
      vlan 1
      #
      vlan 100
        description Management
      #
      radius scheme system
        primary authentication 127.0.0.1 1645
        primary accounting 127.0.0.1 1646
        user-name-format without-domain
      #
      domain system
        access-limit disable
        state active
        idle-cut disable
        self-service-url disable
      #
      user-group system
        group-attribute allow-guest
      #
      local-user admin
        password cipher
        authorization-attribute level 3
        service-type ssh telnet terminal
        service-type web
      #
      stp enable
      #
      ssl server-policy sslpol
        pki-domain MYDOMAIN
      #
      interface NULL0
      #
      interface Vlan-interface199
        ip address 192.168.199.140 255.255.255.0
      #
      interface GigabitEthernet1/0/1
        poe enable
        stp edged-port enable
      #
      interface Ten-GigabitEthernet2/1/2
      #
      dhcp-snooping
      #
      ntp-service unicast-server 192.168.1.71
      #
      ssh server enable
      #
      ip https ssl-server-policy sslpol
      ip https enable
      #
      load xml-configuration
      #
      user-interface aux 0 1
      user-interface vty 0 15
        authentication-mode scheme

    Read the article

  • BadImageFormatException (HRESULT: 0x8007000B) when loading an ASP.NET page in IIS

    - by Brandon
    I set up IIS and moved my folder with all the files to the IIS directory. Now when I go to http://localhost/thefolder I get:

      An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

      Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

      Exception Details: System.BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

      Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

      Stack Trace:
      [BadImageFormatException: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)]
        Luxand.FSDK.ActivateLibrary(String LicenseKey) +0
        FaceRecognition._Default.Page_Load(Object sender, EventArgs e) in D:\Project Details\Layne Projects\DotNet Project\FaceRecognition\FaceRecognition\Default.aspx.cs:60
        System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25
        System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e) +42
        System.Web.UI.Control.OnLoad(EventArgs e) +132
        System.Web.UI.Control.LoadRecursive() +66
        System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +2428
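
    A BadImageFormatException at a native call such as Luxand.FSDK.ActivateLibrary is typically a 32-bit/64-bit mismatch: a 32-bit native DLL being loaded into a 64-bit IIS worker process, or vice versa. If this is IIS 7 or later, one hedged thing to try is allowing the application pool to run 32-bit code (the pool name below is an example, substitute the one hosting the site):

      %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true
      iisreset

    On older IIS versions the equivalent is building the site for the same bitness as the worker process (for example Platform Target = x86 for a 32-bit-only native SDK).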

    Read the article

  • OS X: Storing MySQL data securely, on an encrypted FileVault image using a soft link

    - by GJ
    I am trying to get a macports-installed MySQL to use a data directory stored inside my FileVault-protected home dir. I used sudo cp -a /opt/local/var/db/mysql5 ~/db/ (the -a to ensure file permissions remain intact) and then replaced the original mysql5 directory with a soft link: sudo ln -s ~/db/mysql5 /opt/local/var/db/mysql5 However, when I now try to start MySQL it fails. It follows the soft link at least to the extent that it modifies some files in the ~/db/mysql5 dir, notably the error log which gets appended to it this: 110108 15:33:08 mysqld_safe Starting mysqld daemon with databases from /opt/local/var/db/mysql5 110108 15:33:08 [Warning] '--skip-locking' is deprecated and will be removed in a future release. Please use '--skip-external-locking' instead. 110108 15:33:08 [Warning] '--log_slow_queries' is deprecated and will be removed in a future release. Please use ''--slow_query_log'/'--slow_query_log_file'' instead. 110108 15:33:08 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead. 110108 15:33:08 [Warning] Setting lower_case_table_names=2 because file system for /opt/local/var/db/mysql5/ is case insensitive 110108 15:33:08 [Note] Plugin 'FEDERATED' is disabled. 110108 15:33:08 [Note] Plugin 'ndbcluster' is disabled. /opt/local/libexec/mysqld: Table 'mysql.plugin' doesn't exist 110108 15:33:08 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it. 110108 15:33:09 InnoDB: Started; log sequence number 4 1596664332 110108 15:33:09 [ERROR] /opt/local/libexec/mysqld: Can't create/write to file '/opt/local/var/db/mysql5/mac.local.pid' (Errcode: 13) 110108 15:33:09 [ERROR] Can't start server: can't create PID file: Permission denied 110108 15:33:09 mysqld_safe mysqld from pid file /opt/local/var/db/mysql5/gPod.local.pid ended I can't see why MySQL can't create the pid file, since manually creating it using the _mysql user succeeds (sudo -u _mysql touch mac.local.pid from inside ~/db/mysql5) Any ideas how to resolve this?

    Read the article

  • Group policy not applying to security group

    - by ihavenoideawhatimdoing
    Preface: I have enough privileges to create GPOs in my OU, and have made a few of them for some simple tasks (like deploying a printer to certain users). Not actually a sysadmin...I'm a developer who is winging it. I wanted to create a GPO that would set a mapped folder for a certain security group (which I recently created and that contains only myself). Did the following: Created the GPO in MyOU - Users Removed the default Authenticted Users under Security Filtering Add the security group with my account to Security Filtering Set up the mapping via the User Configuration option Changed GPO Status to "Computer configuration settings disabled" Left WMI filtering to Closed the GPO at this point... Logged in as the target user; ran gpupdate /force Logged out, logged in, ran gpresult /r, no mention of my GPO Rebooted Logged in, re-ran gpupdate /force Logged out, logged in, ran gpresult /r, still no mention of my GPO If I log in with another completely different user, their RSOP information shows that the new GPO is being ignored due to a security restriction, so it appears to be "working" for other users. I just can't get it to actually show up in RSOP for the user it should be working. Is there anything else I can do short of rebooting endlessly and crossing my fingers?

    Read the article

  • Is HAProxy caching the forwarding decision?

    - by shadow_of__soul
    i'm trying to set up a server structure for an application i'm building in Node.js with socket.io. My setup is: HAProxy frontend forward to -> apache2 as default backend (or nginx, is apache in this local test) -> node.js app if the url has socket.io in the request AND a domain name i have something like: global log 127.0.0.1 local0 log 127.0.0.1 local1 notice maxconn 4096 user haproxy group haproxy daemon defaults log global mode http maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 frontend all 0.0.0.0:80 timeout client 5000 default_backend www_backend acl is_soio url_dom(host) -i socket.io #if the request contains socket.io acl is_chat hdr_dom(host) -i chaturl #if the request comes from chaturl.com use_backend chat_backend if is_chat is_soio backend www_backend balance roundrobin option forwardfor # This sets X-Forwarded-For timeout server 5000 timeout connect 4000 server server1 localhost:6060 weight 1 maxconn 1024 check #forwards to apache2 backend chat_backend balance roundrobin option forwardfor # This sets X-Forwarded-For timeout queue 50000 timeout server 50000 timeout connect 50000 server server1 localhost:5558 weight 1 maxconn 1024 check #forward to node.js app The problem comes when i made a request to something like www.chaturl.com/index.html it load perfectly but fails to loads the socket.io files (www.chaturl.com/socket.io/socket.io.js) why it redirect to apache (and should redirect to the node.js app that serve the files). The weird thing is that if i access directly to the socket.io file, after refreshing a few times, it loads, so i suppose is "caching" the forwarding for the client when it makes the first request and reach the apache server. Any suggestion of how this can be solved? or what i can try or look about this?

    Read the article

  • Installing Windows 7 from USB on a Thinkpad T61

    - by Halik
    I am trying to install Windows 7 Professional from USB 3.0 flashdrive, on a Thinkpad T61. The problem is, Thinkpads BIOS will not detect the flash drive as bootable medium, and won't allow to boot from it. What I did: Enabled USB BIOS Support in BIOS (it was on by default) In startup menu, added USB HDD to boot order (it has '-' sign in front of it) Created Windows 7 install media with UNetbootin, WinUSB (linux tool) dd and Grub4DOS. As you can tell, currently, I only have access to Linux machine to make the flashdrive. What happens: The T61 BIOS shows '-USB HDD' in boot order menu. The '-' sign suggests that the plugged flash drive is currently not bootable. The same flashdrive (with the same Windows image on it) is booting without any problems on a Dell D430 and Lenovo Y550. Also, Ubuntu 12.04 install USB created with Unetbootin shows as bootable ('+' sign in BIOS boot order menu) and boots from the F12 boot menu. Additional info thinkwiki.org says that some Thinkpad BIOSes do not use MBR on flashdrives. It suggests using Extended-IPL boot loader, but the provided links are broken and there seems to be no mirrors. Solution: http://superuser.com/a/430186/54970

    Read the article

  • ServerName wildcards in Apache name-based virtual hosts?

    - by Martijn Heemels
    On our LAN I've set up several 'fake' TLDs in the DNS server, with the intention of using them for Apache name-based virtual hosting. I'd like to combine this with mass-virtual-hosting (i.e. VirtualDocumentRoot) on an Ubuntu 10.04 LAMP server. However, I can't get it to select the right vhost! Here is a summary of the Apache config: NameVirtualHost 10.10.0.205 <VirtualHost 10.10.0.205> ServerName *.test VirtualDocumentRoot /var/www/%-3.0.%-2/test/%1/ CustomLog /var/log/apache2/access.log vhost_combined </VirtualHost> <VirtualHost 10.10.0.205> ServerName *.dev VirtualDocumentRoot /var/www/%-3.0.%-2/dev/%1/ CustomLog /var/log/apache2/access.log vhost_combined </VirtualHost> A hostname such as www.domain.com.dev, correctly resolves to 10.10.0.205, but always selects the top vhost, instead of the bottom one, which matches more closely. I was under the impression that Apache would first try to match the ServerName before defaulting to the top vhost for a given IP. What am I doing wrong? Or is this not possible and must I use another IP for each TLD? apachectl -S outputs (trimmed): 10.10.0.205:* is a NameVirtualHost default server *.test port * namevhost *.test port * namevhost *.dev

    Read the article

  • Fixed and dynamic IPs in ISC DHCPD lead to double lease

    - by GorillaPatch
    I would like to have a small dynamic adress part and the most clients are assigned a fixed IP adress. My dhcpd.conf looks like this: use-host-decl-names on; authoritative; allow client-updates; ddns-updates on; # Einstellungen fuer DHCP leases default-lease-time 3600; max-lease-time 86400; lease-file-name "/var/lib/dhcpd/dhcpd.leases"; subnet 192.168.11.0 netmask 255.255.255.0 { ddns-updates on; pool { # IP range which will be assigned statically range 192.168.11.1 192.168.11.240; deny all clients; } pool { # small dynamic range range 192.168.11.241 192.168.11.254; # used for temporary devices } } group { host pc1 { hardware ethernet xx:xx:xx:xx:xx:xx; fixed-address 192.168.11.11; } } The motivation for the pool declaration with deny all hosts comes from the ISC DHCPD homepage http://www.isc.org/files/auth.html This will allow hosts to be first added to the network, where they will receive a temporary IP from the 241-254 adress range and then later write an explicit host declaration. Upon next connect it will receive the right configuration. The problem is that I am getting error messages that 192.168.11.13 has a dynamic and a static lease. I am a bit confused as I expected the pool declaration with deny all clients would not count as dynamic. Dynamic and static leases present for 192.168.11.13. Remove host declaration pc1 or remove 192.168.11.13 from the dynamic address pool for 192.168.11.0/24 Is there a way to have the DHCP server send an DHCPNA to clients if they have a host statement and retain this dynamic range?

    Read the article

  • How to change libcurl SSL backend from gnutls to openssl on Ubuntu server

    - by Jayesh
    I am getting GnuTLS-specific errors in my Tornado web server while processing Google OpenID SSL responses. One of the suggestions I got from the Tornado mailing list is to try the OpenSSL backend instead of GnuTLS, but that doesn't seem to be straightforward on Ubuntu server (11.10). On Ubuntu server, the GnuTLS-flavoured libcurl is provided by the libcurl3-gnutls package, and OpenSSL curl support is provided by the libcurl4-openssl-dev package. (I don't know why the latter is named 4 and dev, but I couldn't find any other openssl+curl package in apt-cache search.) I had libcurl3-gnutls installed by default, but not libcurl4-openssl-dev, so I installed the latter and restarted the Tornado instances. That didn't seem to work; I still got the same GnuTLS errors. I found old discussions on the curl mailing lists about the problems of supporting different SSL backends in libcurl, but didn't find exactly how it is done today. So far my guess is that OpenSSL is built into libcurl and GnuTLS is provided through a separate package (that would explain why there is no libcurl3-openssl). But how do I make libcurl pick up the OpenSSL backend and not GnuTLS? Is there some option in the libcurl/pycurl API to do this? I tried uninstalling libcurl3-gnutls, but apt-get prompted that it would also remove python-pycurl along with it. So that won't do.
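
    The SSL backend is chosen when libcurl (and pycurl) is built, not at runtime, so the usual route is to check which libcurl flavour pycurl is linked against and, if necessary, rebuild pycurl against the OpenSSL one; the commands below are a hedged sketch of how one might do that on an Ubuntu box of that era:

      # see which backend the current pycurl/libcurl reports (look for GnuTLS vs OpenSSL)
      python -c "import pycurl; print pycurl.version"

      # rebuild pycurl against the OpenSSL development flavour of libcurl
      sudo apt-get install libcurl4-openssl-dev python-pip
      sudo pip install --upgrade pycurl
      # (or: apt-get source python-pycurl and rebuild the package with the
      #  OpenSSL dev package installed instead of libcurl4-gnutls-dev)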

    Read the article
