Search Results

Search found 86974 results on 3479 pages for 'visualsvn server'.

Page 1109/3479 | < Previous Page | 1105 1106 1107 1108 1109 1110 1111 1112 1113 1114 1115 1116  | Next Page >

  • Samba access works with IP address only

    - by Sebastian Rittau
    I added a Debian etch host (hostname: webserver, IP address: 192.168.101.2) running Samba to a Windows network with a Windows 2003 PDC (IP address 192.168.101.3). The Samba server exports a public guest share called "Intranet". The server shows up fine in the network, but clicking on it produces an error dialog stating I don't have the necessary permissions. Entering \\webserver manually fails the same way, and using \\webserver\intranet states that the path does not exist. Interestingly, accessing the share by IP address (\\192.168.101.2 or \\192.168.101.2\intranet) works fine. DNS is configured correctly, and "smbclient //webserver/intranet" on another Linux client works fine. One complicating issue is that the webserver is only a VMware virtual machine running on the PDC server. Here is our smb.conf:

        [global]
        workgroup = Foobar
        server string = Webserver
        wins support = yes              ; commenting out these
        wins server = 192.168.101.3     ; two lines has no effect
        dns proxy = no
        guest account = nobody
        [... snipped some unrelated bits, like logging ...]
        security = share
        [... snipped some password-related things ...]
        domain master = no

        [intranet]
        comment = Intranet
        path = /srv/webserver/contents
        browseable = yes
        guest ok = yes
        guest only = yes
        read only = yes
        create mask = 0775
        directory mask = 0775
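
    Since name-based access fails while IP-based access works, a NetBIOS/WINS name-resolution mismatch is the usual suspect. A few diagnostic commands, run from a Linux box with the names from the question (a sketch for narrowing the problem down, not a fix):

        nmblookup -U 192.168.101.3 -R webserver   # ask the WINS server (the PDC) for the name
        nmblookup -A 192.168.101.2                # which NetBIOS names does the Samba host claim?
        smbclient -L //webserver -N               # list shares by name, as guest

    If the WINS query returns nothing or a stale address, the Windows clients are resolving "webserver" differently than DNS does.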

    Read the article

  • Logging upload attempt with proftpd

    - by Amit Sonnenschein
    I have a logging server that I use with external hardware. The idea is that a special piece of hardware uploads logs about its operation every few hours, and from the server I can do whatever I need with the information. The old server was getting a bit too old and I've moved to a new one, where I installed LAMP, proftpd and SSH (just the same as I had on the old server). Now, for some reason, the logs are not being uploaded and I don't know why. The hardware uses direct FTP access - I've checked proftpd.log and saw that the connection is not being rejected (just to make sure I didn't make a mistake with the user/pass). My problem is that for some reason the upload itself is failing... It might be due to a wrong path (as it's hard coded in the hardware), but I can't really know, as proftpd won't give me any details. I've tried to change the log level to "debug" thinking it would give me more information, but I don't see any change... Is there any other way I can make sure proftpd logs EVERYTHING?
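
    One way to get per-command detail out of proftpd is the ExtendedLog directive, combined with a high DebugLevel. A sketch for /etc/proftpd/proftpd.conf (log paths are placeholders; restart proftpd afterwards):

        DebugLevel 10
        SystemLog   /var/log/proftpd/proftpd.log
        TransferLog /var/log/proftpd/xfer.log
        ExtendedLog /var/log/proftpd/commands.log ALL default

    The ExtendedLog line records every FTP command and its response code, which should show exactly which STOR or CWD the hardware attempts and why it fails.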

    Read the article

  • Is it possible to create an SFTP drop box?

    - by Jordan Reiter
    I have a Windows server with folders accessible via SFTP (server is running OpenSSH). scp is blocked. I would like to copy files from a Linux server to the Windows server. SFTP seems like a good option. Ideally I'd like something similar to an FTP drop box, so that the Linux box could just copy files directly over to the Windows box. I'm also open to any solutions to this that would allow me to copy the files while offering the least amount of hassle. The language I'd be using on the Linux box is python; not sure if that factors in or not.
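
    Since the Linux side is Python, the third-party paramiko library can push files over SFTP without any scp involvement. A minimal sketch (host, credentials and paths are placeholders):

        # pip install paramiko
        import paramiko

        transport = paramiko.Transport(("windows-host.example.com", 22))
        transport.connect(username="dropbox", password="secret")
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.put("/var/log/app/report.csv", "incoming/report.csv")  # local -> remote
        sftp.close()
        transport.close()

    Switching to key-based auth (transport.connect(username=..., pkey=...)) would make this safe to run unattended from cron, which is about as close to a drop box as SFTP gets.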

    Read the article

  • An equivalent of IceCast, but for live video streaming?

    - by Kedare
    Hello, I am looking for a solution to stream live video like this:

        A camera/webcam/video output ---> Stream server ---> Clients

    And if possible multiple stream servers, like this (like IceCast):

        A camera/webcam/video output --> Master Stream Server +--> Slave Stream Server ---> Clients
                                                              |                         `--> Clients
                                                              |
                                                              `--> Slave Stream Server ---> Clients
                                                                                        `--> Clients

    The clients will be in Flash, so I think RTMP should be a good protocol. I've heard of Red5 - is it good for that? Does it scale? I would like to get statistics (amount of clients, bandwidth, etc.) - is that possible with Red5? Do you know any other good solution to do that? (Only free and if possible open source.) Thank you!

    Read the article

  • SSH Tunneling for Munin

    - by Dennis Wisnia
    I have a NAS at home and a server in the datacenter. I make an SSH tunnel from the NAS to the server with the following command:

        autossh -fN -M20404 -R 1337:localhost:22 user@server

    It's working and I can access the NAS. Now I want to access the munin-node as well, so I make a new tunnel from the server to the NAS:

        ssh -N -R 49499:localhost:4949 localhost -p 1337

    But if I run nmap localhost -p 49499, the port is closed and I can't access the munin-node. I don't know why, and I would be very happy about your help.
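
    If the goal is to reach the munin-node on the NAS (port 4949) from the server, the second forward looks inverted: -R opens the listening port on the far (NAS) side, while -L opens it on the server, which is where munin needs to connect. A sketch of the corrected command, run on the server through the existing reverse tunnel:

        ssh -N -L 49499:localhost:4949 -p 1337 user@localhost
        nc -vz localhost 49499   # quick check that the forwarded port now answers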

    Read the article

  • Install anonymous proxy on Ubuntu

    - by jack
    How do I install an anonymous proxy on an Ubuntu 9.10 server so that it listens on every public Ethernet interface? I have other services like Nginx and MySQL running on that server, so I hope the proxy server won't conflict with them.
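
    Squid is one common choice; it listens on all interfaces by default, and on port 3128 it won't collide with Nginx (80) or MySQL (3306). A minimal sketch for /etc/squid/squid.conf after apt-get install squid (the ACL network is a placeholder - lock this down before exposing it publicly):

        http_port 3128
        forwarded_for off          # don't pass the client address upstream
        via off                    # strip the Via header
        acl trusted src 192.168.0.0/16
        http_access allow trusted
        http_access deny all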

    Read the article

  • Problems with MGCP proxy creation

    - by Popof
    Hi, I'm trying to bypass my ISP router with my FreeBSD server (I have an optical connection, so there is an RJ45 connecting the box to the WAN). Internet and TV are working fine (using igmpproxy to forward the TV stream), but I have a problem with the phone. The ISP's box is connected to the server, which gives it a LAN address. The problem is that when the box builds MGCP packets (and especially SDP ones) it uses its LAN address. So I've thought of writing a UDP proxy to handle MGCP and SDP packets in order to replace the LAN address with the server's WAN address and then forward the packet to the WAN. Before starting to code, I captured stream packets using my server as a bridge between the WAN connection and the ISP's box. And, in order to see if my solution is viable, I tried to send those packets to the box using nemesis. I tried to send a packet (found in the capture) containing an endpoint audit:

        AUEP 1447 aaln/[email protected] MGCP 1.0
        F: A

    In the wireshark capture the box replied:

        200 1447 OK
        A: a:PCMU;PCMA;G726-16;G726-24;G726-32;G726-40;G.723.1-5.3;G.723.1-6.3;G729;TELEPHONE-EVENT, fmtp:"TELEPHONE-EVENT 0-15,144,149,159", p:10-30, b:4-40, e:on, t:00, s:on, v:L;M;G;D, m:sendonly;recvonly;sendrecv;inactive;confrnce;replcate;netwtest;netwloop, dq-gi

    But when I use nemesis, I get an ICMP error: Port unreachable (Type 3, Code 3). To build this packet, the WAN source address from the capture is replaced with my server's LAN address, using the mgcp-callagent port (2727), and the packet is sent to the LAN address of the box at the mgcp-gateway port (2427). The command I use is:

        nemesis udp -S 192.168.2.1 -D 192.168.2.2 -x 2727 -y 2427 -P packet_to_send

    I also tried a UDP scan of the box on the callagent and gateway ports:

        PORT     STATE         SERVICE
        2727/udp open|filtered unknown
        2427/udp closed        unknown

    I find those results a little bit strange, because it should be port 2427 that is open, as it was in the capture:

        Internet Protocol, Src: <ISP MGCP Server>, Dst: <My WAN Address>
        User Datagram Protocol, Src Port: mgcp-callagent (2727), Dst Port: mgcp-gateway (2427)

    Does someone have any idea how to get my box to respond to my requests? Thanks in advance and sorry for my English.
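
    As for the proxy idea itself: MGCP and SDP are plain text, so a first cut can simply rewrite the address string in both directions. A rough sketch of the shape of such a forwarder (all addresses are placeholders; it ignores SDP media ports and assumes the box is configured to talk to the server as if it were the call agent):

        import select, socket

        BOX       = ("192.168.2.2", 2427)   # ISP box on the LAN
        ISP_AGENT = ("10.0.0.1", 2727)      # ISP call agent on the WAN
        LAN_IP, WAN_IP = b"192.168.2.1", b"203.0.113.10"

        lan = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # plays call agent for the box
        lan.bind(("0.0.0.0", 2727))
        wan = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # plays gateway for the ISP
        wan.bind(("0.0.0.0", 2427))

        while True:
            for s in select.select([lan, wan], [], [])[0]:
                data, src = s.recvfrom(65535)
                if s is lan:   # box -> ISP: swap the LAN address for the WAN one
                    wan.sendto(data.replace(LAN_IP, WAN_IP), ISP_AGENT)
                else:          # ISP -> box: map the WAN address back
                    lan.sendto(data.replace(WAN_IP, LAN_IP), BOX)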

    Read the article

  • SSH issues: Read from socket failed: Connection reset by peer

    - by nitins
    I compiled OpenSSH_6.6p1 on one of our servers. I am able to log in via SSH to the upgraded server, but I am not able to connect from it to other servers running OpenSSH_6.6p1 or OpenSSH_5.8. While connecting I am getting the error below:

        Read from socket failed: Connection reset by peer

    On the destination server, in the logs, I am seeing this:

        sshd: fatal: Read from socket failed: Connection reset by peer [preauth]

    I tried specifying the cipher_spec [ ssh -c aes128-ctr destination-server ] as mentioned here and was able to connect. How can I configure ssh to use this cipher by default? Why is the cipher required here?
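
    The client-side default cipher list can be set with the Ciphers keyword. A sketch for /etc/ssh/ssh_config (or a single user's ~/.ssh/config):

        Host *
            Ciphers aes128-ctr,aes192-ctr,aes256-ctr

    The cipher matters because both ends must agree on at least one: a custom-compiled 6.6p1 (for example, built against a stripped-down OpenSSL) can end up offering a list with no overlap with the peer's, and the connection is reset during negotiation.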

    Read the article

  • Backup system, two locations. Recommendations?

    - by Ragnar123
    Hi there, I have two servers running Ubuntu 10.10, placed at two different locations. One is production and one is development. I wondered if any of you had experience with backing up, best practices and the like. I think a smart solution would be to back up the data on the production server to the development server. Also, I have looked into virtualization, but it seems like overkill, given that the servers only serve about 8 users in a small company.
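
    For a two-box setup like this, a nightly rsync over SSH from production to development covers the basics. A sketch for the development server's crontab (paths and host are placeholders):

        0 2 * * * rsync -az --delete -e ssh backup@production:/var/www/ /backups/production/www/

    Adding --link-dest pointing at the previous night's copy turns this into cheap rotating snapshots rather than a single mirror that faithfully replicates every mistake.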

    Read the article

  • Scheduling VMWare ESXi 4.1 VM Restart

    - by Robin Day
    We had a virtual machine running on a VMware Server host on Windows Server 2003. The machine is set up with non-persistent disks. We had a Windows scheduled task that ran a batch file to reset the machine each week so that it returned to its original state. The batch file we had running was:

        "C:\Program Files\VMware\VMware Server\vmware-cmd" "C:\Virtual Machines\VirtualMachineName\VirtualMachineName.vmx" stop hard
        "C:\Program Files\VMware\VMware Server\vmware-cmd" "C:\Virtual Machines\VirtualMachineName\VirtualMachineName1.vmx" start

    We have since migrated this machine to the free version of ESXi 4.1. Can anyone let me know if and how it is possible to schedule such a restart?
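
    On ESXi the rough equivalent of vmware-cmd is vim-cmd, available over SSH once Tech Support Mode is enabled. A sketch (the VM id 42 is a placeholder - look it up first):

        vim-cmd vmsvc/getallvms        # note the numeric Vmid of the machine
        vim-cmd vmsvc/power.off 42     # hard stop
        vim-cmd vmsvc/power.on 42

    These lines can go in a shell script triggered either by a cron entry on the ESXi host itself or, more robustly across host reboots, by a scheduled task on another machine that runs them remotely via plink/ssh.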

    Read the article

  • Redirect Root Directory to Subdirectory using mod_rewrite

    - by manyxcxi
    I am trying to rewrite / to /folder using .htaccess, but all I am getting is the Apache HTTP Server Test Page. My root directory looks like this:

        /
          .htaccess
          /folder
          /folder2
          /folder3

    My .htaccess looks like this:

        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/folder/
        RewriteRule (.*) /folder/$1

    What am I doing wrong? I checked my httpd.conf (I'm running CentOS) and the mod_rewrite module is being loaded. As a side note, my server is not a www server, it's simply a virtual machine, so its hostname is centosvm.
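
    Seeing the CentOS test page usually means the .htaccess file is never consulted: the stock httpd.conf ships with AllowOverride None, and welcome.conf serves the test page. A sketch of the usual fix (paths assume the stock CentOS layout):

        # in /etc/httpd/conf/httpd.conf
        <Directory "/var/www/html">
            AllowOverride All
        </Directory>

    Then remove or comment out /etc/httpd/conf.d/welcome.conf and run service httpd restart; after that the rewrite rules above should take effect.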

    Read the article

  • In APC+PHP, how much RAM is too much? Is it okay to set apc.shm_size to many GB?

    - by Jeremy Clarke
    On our server we have a LOT of RAM for our traffic levels (16GB). The HTTP processes regularly eat up all CPU and need to be restarted without even getting close to using swap memory, so I'm looking for ways to spend RAM to ease the load on Apache (and/or help the separate MySQL server, which may be what is breaking Apache). I have many WordPress installs on the HTTPD instance, so APC sometimes uses as much as 900MB of RAM (according to the apc.php charts). Just in case, I have apc.shm_size set to 1600MB, which is more than it needs but not more than I can spare. This means there is usually lots of extra RAM available to APC, but also very little turnover, and fragmentation is never more than 1%. Is this dangerous? Should I be slimming down APC to less than 1GB just on principle? Should I be expecting some turnover within APC in the name of bringing its overall footprint down? Having so much memory devoted to APC means that in top/htop every single httpd process shows ~1.9GB in the VIRT memory column. Obviously this is shared memory and not used per-process, but could it be hurting our server? NOTE: The problem with the server remains unclear, but the effect is that about 60 times a day all 8 CPUs fill up to 100% and everything stops working until Monit sees that Apache is broken and restarts it (Monit also saves the MySQL server). I'm not sure if APC is even part of the problem, but I'm trying to optimize everything just in case.
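
    For reference, the relevant knobs live in apc.ini; a sketch with hedged values (these are judgment calls, not recommendations - size the segment a comfortable margin above the ~900MB actually used):

        ; /etc/php.d/apc.ini
        apc.shm_size=1024M
        apc.stat=1      ; keep stat on while WordPress files still change on disk
        apc.ttl=7200    ; let unused entries expire instead of forcing a full-cache flush

    Oversizing the segment mostly costs address space (hence the large VIRT column), not resident RAM, so 1600M is wasteful but not dangerous by itself.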

    Read the article

  • DNS lookup fails on all the Mac workstations

    - by user39564
    Hi, I am having this insane problem. We are Mac-heavy users: around 10 workstations, one Xserve server, two Windows workstations and one Linux box (me). Last year I added an A record to our domain's DNS. However, we had to change that to a new IP a few months ago. All the Mac workstations fail to resolve the new address and still resolve to the old IP, even after 2 months. On both the Windows workstations and my Linux box a simple nslookup resolves to the proper IP. However, on ALL the Mac workstations, dig and nslookup report the old IP address. From my Linux workstation:

        jp@lo:~$ nslookup - 208.67.222.222 client.xyz.com
        Server:     208.67.222.222
        Address:    208.67.222.222#53

        Non-authoritative answer:
        Name:    client.xyz.com
        Address: 68.71.40.xx

    But when I try the exact same command from any Mac workstation, I get the old IP:

        $ nslookup - 208.67.222.222 client.xyz.com
        Server:     208.67.222.222
        Address:    208.67.222.222#53

        Non-authoritative answer:
        Name:    client.xyz.com
        Address: 98.143.155.xx

    The strange thing is that this only happens on our internal network. No problem from home nor from another server. I did try to flush the DNS, don't worry - it did not help. I am starting to wonder if my router (OpenWRT) or Mac OS X Server is in some way spoofing the DNS request and thus acting as a cache. Any suggestions/comments would be appreciated. Thank you, JP
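
    On the Macs it is worth separating what applications actually get (answered through Directory Services, which has its own cache) from what dig/nslookup query directly. A sketch of the usual checks (the flush command depends on the OS X release):

        dscacheutil -q host -a name client.xyz.com   # what apps see
        dscacheutil -flushcache                      # flush on 10.5/10.6
        sudo killall -HUP mDNSResponder              # flush on later releases
        grep client /etc/hosts                       # stale manual entries?

    If a query aimed straight at 208.67.222.222 still returns the old IP from the internal network only, the next suspect is the OpenWRT box intercepting port 53 (dnsmasq answering with a cached or pinned entry).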

    Read the article

  • xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

    - by mazgalici
    root@mazgalici:~# startx

        X.Org X Server 1.7.6
        Release Date: 2010-03-17
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 2.6.24-28-server i686 Ubuntu
        Current Operating System: Linux mazgalici 2.6.18-194.26.1.el5.028stab079.2PAE #1 SMP Fri Dec 17 19:34:22 MSK 2010 i686
        Kernel command line: quiet
        Build Date: 10 November 2010 11:25:26AM
        xorg-server 2:1.7.6-2ubuntu7.4 (For technical support please see )
        Current version of pixman: 0.16.4
        Before reporting problems, check to make sure that you have the latest version.
        Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
        (==) Log file: "/var/log/Xorg.0.log", Time: Tue Jan 11 01:28:48 2011
        (==) Using config directory: "/usr/lib/X11/xorg.conf.d"

        Fatal server error:
        xf86OpenConsole: Cannot open /dev/tty0 (No such file or directory)

        Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        Please also check the log file at "/var/log/Xorg.0.log" for additional information.

        ddxSigGiveUp: Closing log
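
    The kernel string (028stab...) indicates an OpenVZ container, and containers normally have no console device, hence the missing /dev/tty0. A sketch of the usual workarounds (whether a real X session is useful inside a container is another question):

        mknod /dev/tty0 c 4 0          # create the node this error complains about
        # or run X against a virtual framebuffer instead of a console:
        apt-get install xvfb && Xvfb :1 &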

    Read the article

  • Connection between Asp.Net and Oracle 10g Express Edition

    - by l3gion
    Hello, I'm struggling to find a way to connect my ASP.NET + C# application to my Oracle 10g Express Edition. Here's my scenario: I'm on Mac OS and I have two virtual machines, one with Win 7 (the VS 2010 app) and another with a Parallels virtual appliance running Oracle 10g Express Edition 1.1. Which provider (OleDb, ODP.NET, etc.) should I use? How do I make the connection to the server in C#? Right now I have this:

        <appSettings>
            <add key="conn" value="Data Source=10.211.55.11;Persist Security Info=True;User ID=l3gion;Password=l3gion;" />
        </appSettings>

    And in the .cs file:

        SqlCommand cmd = new SqlCommand("insert_thing", new SqlConnection(ConfigurationManager.AppSettings["conn"]));
        cmd.CommandType = CommandType.StoredProcedure;

    (insert_thing is a stored procedure.) Using this I get this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    I've searched for some possible solutions and tried a few, including disabling the firewall and allowing remote connections to the Express Edition with "EXEC DBMS_XDB.SETLISTENERLOCALACCESS(FALSE);". The error persists. Can anyone guide me in the right direction? I'm a newbie with this type of thing. Thank you for your patience. Regards
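
    The error message itself is the clue: SqlConnection and SqlCommand belong to the SQL Server client (System.Data.SqlClient), so this code never speaks to Oracle at all. With ODP.NET the equivalents are OracleConnection and OracleCommand from Oracle.DataAccess.Client, and the connection string names the Oracle listener. A sketch (host and credentials copied from the question; XE is the default Express Edition service name, port 1521 the default listener port):

        <connectionStrings>
            <add name="OracleConn"
                 connectionString="Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.211.55.11)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=XE)));User Id=l3gion;Password=l3gion;"
                 providerName="Oracle.DataAccess.Client" />
        </connectionStrings>

    In the .cs file, new OracleConnection(...) and new OracleCommand("insert_thing", conn) then replace the Sql* classes one for one.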

    Read the article

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so:

        server {
            listen 80;
            server_name *.example1.com *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/conf.d/proxy.conf;
            }
        }

    And since we have multiple SSL sites (with different SSL certificates) I have a server{} block for each of them, like so:

        server {
            listen 443 ssl;
            server_name *.example1.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8443;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

        server {
            listen 443 ssl;
            server_name *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8445;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

    First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything, first at the nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB -> http on nginx -> http on apache? Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in each server block?
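
    Terminating TLS exactly once, at the ELB, is the usual answer: the ELB adds X-Forwarded-Proto itself, nginx listens on plain HTTP and passes the header through, and Apache/PHP trust it. The per-certificate nginx blocks then disappear entirely. And yes, the static-asset handling can live in one file; a sketch:

        # /etc/nginx/conf.d/static-assets.conf
        location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
            access_log off;
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

        # then, in each server block:
        include /etc/nginx/conf.d/static-assets.conf;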

    Read the article

  • Can ping between host and guest, but can't access web server with VirtualBox

    - by Gastoni
    How come I can ping back and forth between host and guest using VirtualBox, but I can't access the web server installed in the guest from the host? I'm using a host-only network.

        Host: Ubuntu 10.10
        vboxnet0 - 192.168.56.1
        ping to self: works
        ping to guest: works
        access to web server in guest: FAILS

        Guest: Fedora 13
        eth1 - 192.168.56.101
        ping to self: works
        ping to host: works
        access to web server in host: works
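
    When ping works but TCP doesn't, the guest's firewall or the web server's bind address are the usual culprits; Fedora blocks inbound port 80 by default. A sketch of checks to run in the guest:

        curl -s http://localhost/                            # does the server answer locally?
        sudo netstat -tlnp | grep :80                        # listening on 0.0.0.0, or only 127.0.0.1?
        sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT   # open port 80 (runtime only)

    If the iptables line fixes it, make the rule permanent through Fedora's firewall configuration instead.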

    Read the article

  • Red Hat 5.4 slow processing

    - by yucefrizk
    I'm running Red Hat Linux 5.4 on an HP DL580 server with 16 processors and 64 GB of RAM. I'm connecting to the server remotely through SSH. After entering the password, it takes a long time to return the command line; if I hit Ctrl+C during this time, I get a prompt, but not the correct bash prompt (I have to run bash to get to my correct prompt). I tried to install Apache on the server: ./configure took 4 hours to finish instead of a minute or two, and an Oracle installation showed the same behavior. The server's disks are mirrored using a RAID controller. Any idea what could be the reason for this slowness?
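
    Slow logins plus slow builds usually point at either name-resolution timeouts or a degraded/cache-disabled RAID array rather than the CPUs. A sketch of quick checks:

        # rule out reverse-DNS stalls at login
        grep -i usedns /etc/ssh/sshd_config   # 'UseDNS no' avoids the lookup
        time getent hosts $(hostname)         # should return instantly

        # see whether the box is waiting on disk
        vmstat 5 5        # a high 'wa' column means I/O wait
        iostat -x 5 3     # (sysstat package) high await/%util points at the array

    If I/O wait dominates, the RAID controller's battery and write-cache state are the next things to check with HP's management tools.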

    Read the article

  • Nginx Multiple Domains

    - by showFocus
    I am trying to add a second virtual host to nginx. When I go to the new domain, it redirects to the old one. I have tried restarting Nginx and rebooting the server. Has anyone come across this before? Care to share?

    File: nginx.conf

        user www-data www-data;
        worker_processes 4;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay off;
            keepalive_timeout 5;
            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            include /usr/local/nginx/sites-enabled/*;
        }

    File: ../sites-enabled/domain1.co.uk

        server {
            listen 80;
            server_name www.domain1.co.uk;
            rewrite ^/(.*) http://domain1.co.uk/$1 permanent;
        }

        server {
            listen 80;
            server_name domain1.co.uk;
            access_log /home/me/public_html/domain1.co.uk/log/access.log;
            error_log /home/me/public_html/domain1.co.uk/log/error.log;

            location / {
                root /home/me/public_html/domain1.co.uk/public/;
                index index.php index.html;
                # WordPress supercache & permalinks.
                include /usr/local/nginx/conf/wordpress_params.super_cache;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/me/public_html/domain1.co.uk/public/$fastcgi_script_name;
            }
        }

    File: ../sites-enabled/domain2.co.uk

        server {
            listen 80;
            server_name www.domain2.co.uk;
            rewrite ^/(.*) http://domain2.co.uk/$1 permanent;
        }

        server {
            listen 80;
            server_name domain2.co.uk;
            access_log /home/me/public_html/domain2.co.uk/log/access.log;
            error_log /home/me/public_html/domain2.co.uk/log/error.log;

            location / {
                root /home/me/public_html/domain2.co.uk/public/;
                index index.php index.html;
                # Basic version of WordPress parameters, supporting nice permalinks.
                # include /usr/local/nginx/conf/wordpress_params.regular;
                # Advanced version of WordPress parameters supporting nice permalinks and WP Super Cache plugin
                include /usr/local/nginx/conf/wordpress_params.super_cache;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/me/public_html/domain2/public/$fastcgi_script_name;
            }
        }
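
    Nothing in the second vhost looks obviously wrong, so the usual checklist is about whether nginx ever loaded it. A sketch (paths assume the /usr/local/nginx source-build layout used above):

        /usr/local/nginx/sbin/nginx -t          # syntax OK, and are both files read?
        ls -l /usr/local/nginx/sites-enabled/   # is domain2.co.uk actually present/symlinked?
        /usr/local/nginx/sbin/nginx -s reload   # reload the master process
        curl -H 'Host: domain2.co.uk' http://127.0.0.1/   # which server block answers?

    If the curl test serves domain1, nginx is falling back to the first server block as the default, meaning the request's Host header isn't matching server_name domain2.co.uk; a browser still resolving the new domain to the old DNS entry produces the same symptom.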

    Read the article

  • LDAP + NFS + automount home directories permissions issue

    - by noobishguy
    When an LDAP user logs into the system, they have incorrect permissions on their home directory. The LDAP and NFS services live on the same server. The directory shows the correct ownership / permissions:

        drwx------. 4 ldaptest ldaptest 4096 Jun  9  2014 ldaptest

    However, the UID / GID on the client do not match those on the server. Client:

        bash-4.1$ id
        uid=10001(ldaptest) gid=10001(ldaptest) groups=10001(ldaptest) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

    Server:

        [root@ldap1 log]# id ldaptest
        uid=502(ldaptest) gid=502(ldaptest) groups=502(ldaptest)

    How do I resolve this?
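
    Classic NFSv3 behavior: permissions are enforced by numeric UID, and the server resolves ldaptest from its local /etc/passwd (502) instead of from LDAP (10001). The usual fix is to make the server consult LDAP too, so both sides agree on the numbers. A sketch:

        # on the server: point NSS at LDAP (e.g. via authconfig --enableldap)
        # /etc/nsswitch.conf
        passwd: files ldap
        group:  files ldap

        id ldaptest        # should now report 10001, matching the client

    Alternatively, delete the stale local ldaptest account on the server and chown the exported home directory to the LDAP UID.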

    Read the article

  • Disk Activity Alert Windows SBS 2003 on Dell PowerEdge 830 with Raid

    - by Ron Whites
    Background: I have a Dell PowerEdge 830 server running Windows SBS 2003. It has 4GB of RAM and a CERC SATA 6CH controller with three 160GB drives in a RAID 5 configuration. The problem: I am seeing "Disk Activity Alert on Server" admin emails. These often occur when disk backups, defragmentation or other high disk usage is going on; generally the server isn't overstressed. The Disk Alert emails say, in part:

        The following disk has low idle time, which may cause slow response time when reading or writing files to the disk.
        Disk: 0 C: F: D:
        Review the Disk Transfers/sec and % Idle Time counters for the PhysicalDisk performance object. If the Disk Transfers/sec counter is consistently below 150 while the % Idle Time counter remains very low (close to 0), there may be a problem with the disk driver or hardware.

    The questions I have: With what utility can I review Disk Transfers/sec and % Idle Time? It appears there is no utility for that on the server! I think I may need to download a very large (two-DVD) Dell "OpenManage" utility to be able to monitor the RAID system and see what the problem is - is that true?
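
    Both counters are visible with the built-in tools - perfmon.msc graphically, or typeperf from a command prompt - so no Dell download is needed for that part. A sketch:

        rem sample both counters every 5 seconds, 12 times
        typeperf "\PhysicalDisk(*)\Disk Transfers/sec" "\PhysicalDisk(*)\% Idle Time" -si 5 -sc 12

    For the RAID controller's view, the much smaller OpenManage Server Administrator component (rather than the full DVD set) is still worth installing: a degraded array or a disabled write cache would explain chronically low idle time.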

    Read the article

  • Hosting multiple websites from home

    - by dean nolan
    I have just been accepted into Microsoft's Website Spark program, which I mainly joined for the tools: Visual Studio, Blend. I have a few of my own websites, personal and a couple of business ones, and I also work freelance, so sometimes I would like a place to just put up a demo of a client's project. The websites I currently have are all on different hosting providers and domain registrars. Website Spark comes with Windows Server 2008 and SQL Server 2008. It would be really advantageous for me to have all of this in one place, and to have complete control over the database and the environment. So I am thinking, over the next 4-6 months, of migrating all this to my own server that I will host from home, or maybe set up at home and then move into a proper datacentre. I was wondering what steps I should take and what to be aware of, specifically: 1) Having all these different websites on one computer and having each URL go to the proper site. 2) Cost effectiveness: having the server at home as opposed to a datacentre - most solutions I see charge over £1000 a month to have a machine in a datacentre. This is mostly for my own ease of management, as the shared hosting I currently have is very limited configuration-wise. Would getting a server in-house be beneficial for later upgrading to the cloud? What measures should I take with my ISP? I know this is a lot to ask, but even links to good articles would be great. Thanks
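
    For point 1, IIS routes multiple sites on one IP by host header, so a single home connection can serve every domain. A sketch with appcmd (IIS 7 on Server 2008; site names, domains and paths are placeholders):

        %windir%\system32\inetsrv\appcmd add site /name:"SiteA" /bindings:http/*:80:www.sitea.example /physicalPath:"C:\sites\sitea"
        %windir%\system32\inetsrv\appcmd add site /name:"SiteB" /bindings:http/*:80:www.siteb.example /physicalPath:"C:\sites\siteb"

    Each domain's DNS then points at the home IP (a dynamic-DNS service helps if the ISP won't provide a static address); it's also worth confirming the ISP doesn't block inbound port 80 on residential lines.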

    Read the article

  • How can I use HAproxy with SSL and get X-Forwarded-For headers AND tell PHP that SSL is in use?

    - by Josh
    I have the following setup:

        (internet) ---> [ pfSense Box   ]    /-> [ Apache / PHP server ]
                        [running HAproxy] --+--> [ Apache / PHP server ]
                                            +--> [ Apache / PHP server ]
                                             \-> [ Apache / PHP server ]

    For HTTP requests this works great; requests are distributed to my Apache servers just fine. For SSL requests, I had HAproxy distributing the requests using TCP load balancing, and it worked. However, since HAproxy didn't act as an HTTP proxy, it didn't add the X-Forwarded-For HTTP header, and the Apache / PHP servers didn't know the client's real IP address. So I added stunnel in front of HAproxy, having read that stunnel could add the X-Forwarded-For HTTP header. However, the package which I could install into pfSense does not add this header... and this apparently kills my ability to use KeepAlive requests, which I would really like to keep. But the biggest issue, which killed the idea, was that stunnel converted the HTTPS requests into plain HTTP requests, so PHP didn't know that SSL was enabled and tried to redirect to the SSL site. How can I use HAproxy to load balance across a number of SSL servers, allowing those servers to both know the client's IP address and know that SSL is in use? And if possible, how can I do it on my pfSense server? Or should I drop all this and just use nginx?
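
    With HAProxy 1.5 or later this can all happen in one place: HAProxy terminates SSL itself, runs in HTTP mode (so forwardfor works), and stamps the requests so PHP knows the original scheme. A sketch (certificate path and backend addresses are placeholders; pfSense's HAProxy package must be new enough to expose these options):

        frontend www
            mode http
            bind *:80
            bind *:443 ssl crt /etc/haproxy/site.pem
            option forwardfor                                   # adds X-Forwarded-For
            http-request set-header X-Forwarded-Proto https if { ssl_fc }
            default_backend apache

        backend apache
            mode http
            server web1 10.0.0.11:80 check
            server web2 10.0.0.12:80 check

    On the PHP side, trusting X-Forwarded-Proto (e.g. setting $_SERVER['HTTPS'] when it equals "https") restores the SSL-awareness that the stunnel setup lost.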

    Read the article

  • hMailServer Email + MX Records Configuration

    - by asn187
    I'm trying to make DNS changes to enable email to be sent using hMailServer. My mail server is on a separate machine with a separate IP address. I have already added MyDomain.com and an email account, and I have created an MX record with the mail server being mail.domain.com and a priority of 20. 1) But the question is: how do I now link this MX record for the domain to my mail server / mail server IP address? 2) What changes are needed in hMailServer to complete the process and be able to send emails for the domain? 3) In the settings for SMTP delivery of email, what should my configuration look like?
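
    The "link" is an A record: the MX record names a host, and that host's A record supplies the IP of the machine running hMailServer. A sketch in zone-file form (domain and IP are placeholders):

        mydomain.com.       IN  MX  20  mail.mydomain.com.
        mail.mydomain.com.  IN  A       203.0.113.25        ; the hMailServer box

    For outbound mail, hMailServer can deliver directly (leave the SMTP relayer fields empty under Settings > Protocols > SMTP > Delivery of e-mail) or relay through the ISP's smarthost; a matching PTR record for the IP makes other servers far less likely to reject the mail.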

    Read the article

  • On clients use generic driver for printer shared by CUPS

    - by Daniel
    I know this worked really easily with a recent CUPS version some years ago; unfortunately, I'm failing to do the following with the latest CUPS nowadays. How do I share a printer via CUPS without using printer-specific drivers on the clients? The printer is a Samsung ML-2010. The important fact is that it needs a quite specific library for printing called splix, which is installed on the server and prints well. What makes trouble is using that printer over the network. I found out how to use Avahi under Linux to make use of DNS-SD to advertise and discover printers. But as far as I understand, the new CUPS offers the internal driver interface on the network, which has 2 major issues: anybody can fiddle with the driver, and I don't trust any uncommon printer library to be "network secure"; and anyone who wants to use my printer, including guests, needs to install the specific drivers first. I remember the old days when I could enable "Share this printer" on the CUPS server and all clients would magically detect the printer and just send their job data to the server, which did the driver work. After everything I've read, I guess this is related to the changes Apple introduced with CUPS, including dropping the integrated network discovery protocol. If it helps: Server: Ubuntu LTS 12.04 with CUPS 1.5.3. Client: Arch Linux with current CUPS 1.6.1. On another box with Ubuntu, the printer was at least set up automatically, but that mechanism used the splix library too.
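
    One way to get the old behavior back is to skip client-side queues entirely and point the clients' CUPS at the server, so every job is rendered by the server's splix driver. A sketch (the hostname is a placeholder):

        # on the server (Ubuntu 12.04):
        cupsctl --share-printers                        # allow sharing from the LAN
        lpadmin -p ML2010 -o printer-is-shared=true     # share the existing queue

        # on each Arch client, in /etc/cups/client.conf:
        ServerName printserver.example.com

    With ServerName set, the clients list the server's queues directly and send raw application output there - no local driver, and nothing for guests to install beyond the one-line config.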

    Read the article
