Search Results

Search found 18249 results on 730 pages for 'real world haskell'.

Page 564/730

  • Apache, Tomcat and mod_jk for load balancing

    - by pHk
    Hi guys. I've set up a basic Apache (2.2.x) and Tomcat (6.0.x) installation that uses mod_jk for load balancing, configured through the worker.properties file. Preliminary testing seems to show that this works relatively well, and it was quite easy to set up. However, the fact that it was so easy to set up has got me a little worried. We're dealing with 100-300 concurrent users using the same web application (deployed on 2 or 3 Tomcat instances). I have done a little Googling and looking around on here, and there seems to be more than one way to accomplish this (one example on here used a balancer:// style URL, which I've never seen before in an Apache config). For example, one question I ask myself is how reliable the load detection in mod_jk really is (Busyness, Session, Request, etc.). In your experience, does this setup prove reliable in real-world scenarios? Any pointers on improvements, pitfalls or interesting literature/articles? I've worked with Apache before, but am in no way an expert. Thanks in advance.
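
    For reference, a minimal worker.properties of the kind in question (hostnames are placeholders; mod_jk's method flag accepts R=Request, S=Session, T=Traffic, B=Busyness):

        worker.list=lb
        # Two Tomcat instances speaking AJP13 (hosts are hypothetical)
        worker.tomcat1.type=ajp13
        worker.tomcat1.host=tomcat1.internal
        worker.tomcat1.port=8009
        worker.tomcat2.type=ajp13
        worker.tomcat2.host=tomcat2.internal
        worker.tomcat2.port=8009
        # The load-balancer worker itself
        worker.lb.type=lb
        worker.lb.balance_workers=tomcat1,tomcat2
        worker.lb.method=B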

    Read the article

  • Nginx, as reverse proxy, could not proxy_pass to a domain pointing to the local JBOSS

    - by larryzhao
    My environment is Ubuntu 12.04, Nginx 1.2.0 and Torquebox 2.0.3, which is actually JBoss AS 7. I have two apps deployed on Torquebox; it listens on 8080 and the apps have different hostnames, app1.mydomain.com and app2.mydomain.com. I added 127.0.0.1 app1.mydomain.com and 127.0.0.1 app2.mydomain.com to /etc/hosts, then curl app1.mydomain.com:8080 and curl app2.mydomain.com:8080 both return correctly. Then I go to my nginx. I would like nginx to pass visits to www.app1.com on to app1.mydomain.com:8080, so I have the following configuration:

        # primary server - proxypass to torquebox
        server {
            listen 80;
            server_name www.app1.com;
            access_log off;
            error_log off;
            # proxy to Torquebox
            location / {
                proxy_pass http://app1.mydomain:8080/;
                proxy_redirect off;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_max_temp_file_size 0;
                client_max_body_size 10m;
                client_body_buffer_size 128k;
                proxy_connect_timeout 90;
                proxy_send_timeout 90;
                proxy_read_timeout 90;
                proxy_buffer_size 4k;
                proxy_buffers 4 32k;
                proxy_busy_buffers_size 64k;
                proxy_temp_file_write_size 64k;
            }
        }

    But it doesn't work. curl www.app1.com returns nothing, and if I visit www.app1.com in Safari, the HTTP return code is 404. I don't know why; I need help.
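
    Worth double-checking against the /etc/hosts entries above: the proxy_pass host is written as app1.mydomain, without the .com suffix. A variant using an explicit upstream block with the full hostname (a sketch under the same setup, not a verified fix):

        upstream app1_backend {
            server app1.mydomain.com:8080;
        }
        server {
            listen 80;
            server_name www.app1.com;
            location / {
                # Torquebox routes by hostname, so forward the name it
                # was deployed under rather than $host (www.app1.com)
                proxy_set_header Host app1.mydomain.com;
                proxy_pass http://app1_backend;
            }
        }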

    Read the article

  • How to host an ssh server?

    - by balki
    Hi, I have a broadband internet connection and a wireless modem (Airtel India). I don't have a static IP address. I want to host an ssh/web/ftp server visible to the outside world, just for testing and learning purposes, so I can ask my friend to connect to my current IP address and test. My modem has an admin interface which allows me to forward and open ports. I set up the ssh server as shown and checked whether port 22 is open using this website, Port Scan, and port 22 is open. I have an OpenSSH server running, and it works if I do ssh [email protected], which is my local IP address, but doesn't work if I do ssh [email protected], where 122.xx.xx.xx is the external IP address of my modem, which I checked on whatismyipaddress.com. Since it looks like the port is open, I wonder if there is some setting I need to change in my server config to expose my server. How should I go about solving this?
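
    One possibility to rule out (an assumption, not something established above): many home routers don't support NAT loopback, so testing the external IP from inside the LAN fails even when the port forward works for outsiders. A quick check sketch (addresses are placeholders):

        # Confirm sshd listens on all interfaces, not just localhost
        sudo netstat -tlnp | grep :22
        # Expect something like  0.0.0.0:22 ... sshd
        # Then test the external address from OUTSIDE the LAN
        # (e.g. ask the friend to run this)
        ssh -v user@122.xx.xx.xx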

    Read the article

  • Nginx terminate SSL for wordpress

    - by Mike
    I have a bit of a problem. We run a WordPress blog behind an nginx proxy and are looking to terminate the SSL on the nginx side. Our current nginx config is:

        upstream admin_nossl {
            server 192.168.100.36:80;
        }
        server {
            listen 192.168.71.178:443;
            server_name host.domain.com;
            ssl on;
            ssl_certificate /etc/nginx/wild.domain.com.crt;
            ssl_certificate_key /etc/nginx/wild.domain.com.key;
            ssl_session_timeout 5m;
            ssl_protocols SSLv2 SSLv3 TLSv1;
            ssl_prefer_server_ciphers on;
            ssl_session_cache shared:SSL:10m;
            ssl_ciphers RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
            location / {
                proxy_read_timeout 2000;
                proxy_next_upstream error;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_max_temp_file_size 0;
                proxy_pass http://admin_nossl;
                break;
            }
        }

    It just does not seem to work. I can hit https://host.domain.com, but it quickly switches back to non-secured from what I can see. Any pointers?
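
    A common cause with WordPress behind an SSL-terminating proxy (offered as a sketch, not verified against this exact setup): WordPress sees plain HTTP from the backend and rewrites links back to http://. Passing the scheme through and honoring it in wp-config.php usually addresses that:

        # in the location / block above
        proxy_set_header X-Forwarded-Proto https;

        # in wp-config.php, before wp-settings.php is loaded
        if (isset($_SERVER['HTTP_X_FORWARDED_PROTO'])
                && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
            $_SERVER['HTTPS'] = 'on';
        }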

    Read the article

  • Why do moving lines become fuzzy on my monitor?

    - by CodeInChaos
    I recently got a new notebook. With moving images there are some graphical issues, and I'd like to know what causes them. None of my earlier monitors exhibited similar issues. Moving high-contrast lines become jagged, similar to interlaced video. When moving a horizontal line vertically, the artifacts are colored; when moving a vertical line horizontally, they aren't. The effect isn't observable in static images, and when moving faster the zone in which it occurs becomes wider. The effect is very visible if I move a window around, on the borders of the window and wherever high-contrast lines appear, but it appears when watching videos too. The vertical line in the image (not included here) moves to the right, the horizontal line upwards. The effect is most likely related to the fact that each real pixel consists of different sub-pixels for the different color channels. But how would those cause the observed effect? Is the rate at which the different colors change to the destination brightness different? The optical impression is that every second pixel, in a chessboard-like arrangement, adapts more slowly than its neighbors. But that doesn't make much sense.

    Read the article

  • How can I setup OpenVPN with IPv4 and IPv6 using a tap device?

    - by Lekensteyn
    I've managed to set up OpenVPN for full IPv4 connectivity using tap0. Now I want to do the same for IPv6. Addresses and network setup (note that my real prefix is replaced by 2001:db8):

        2001:db8::100:0:0/96     my assigned IPv6 range
        2001:db8::100:abc:0/112  OpenVPN IPv6 range
        2001:db8::100:abc:1      tap0 (on server; set as gateway on client)
        2001:db8::100:abc:2      tap0 (on client)
        2001:db8::1:2:3:4        gateway for server

        Home laptop (running Kubuntu 10.10; OpenVPN 2.1.0-3ubuntu1)
          tap0: 2001:db8::100:abc:2/112, gateway 2001:db8::100:abc:1/112
              |
             wifi
              |
            router
              |
           INTERNET  (OpenVPN tunnel between the two tap0 interfaces)
              |
        VPS (running Debian 6; OpenVPN 2.1.3-2)
          eth0: 2001:db8::1:2:3:4/64, gateway 2001:db8::1
          tap0: 2001:db8::100:abc:1/112

    The server has both native IPv4 and IPv6 connectivity; the client has only IPv4. I can ping6 to and from my server over OpenVPN, but not to other machines (for example, ipv6.google.com). net.ipv6.conf.all.forwarding is set to 1, and I've tried disabling net.ipv6.conf.all.accept_ra as well, without luck. Using tcpdump on both the server and client, I can see that packets are actually transferred over tap0 to eth0. The router (2001:db8::1) sends a neighbor solicitation for the client (2001:db8::100:abc:2) to eth0 after it receives the ICMP6 echo-request. The server does not respond to that solicitation, which causes the ICMP6 echo-request not to be routed to the destination. How can I make this IPv6 connection work?
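
    The symptom described (the router solicits for the client's address on eth0 and the server stays silent) matches the classic NDP-proxy situation: a sub-range of an on-link /64 is being routed over a tunnel. A sketch of the usual remedy on the server, with addresses taken from the question (not verified on this setup):

        # Let the kernel answer neighbor solicitations on behalf of others
        sysctl -w net.ipv6.conf.all.proxy_ndp=1
        # Answer solicitations for the VPN client's address on eth0
        ip -6 neigh add proxy 2001:db8::100:abc:2 dev eth0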

    Read the article

  • Transcoding media server streaming to the iPhone

    - by pilif
    I have a huge collection of videos in different formats, but with one thing in common: they are not playable on an iPhone (or iPod Touch). Instead of complaining about Apple's IMHO broken world view ("there are no video formats but QuickTime and MP4"), I wonder if there's a solution out there that allows streaming these different videos to the iPhone. This means that the source media needs to be transcoded on the fly. I already tried a few solutions out there, but with varying success: PS3 Media Server kind of worked, but only once and only for one single file. TVersity is said to work, but it requires UAC to be disabled, and I don't see any need for that. The solution I'm looking for should run on Windows 2008 Server or Linux. I just can't believe that there's nothing out there that would allow me to stream my huge video collection on my iPhone (we're talking WiFi here, not 3G).

    After looking at the answers provided and after retrying TVersity without much success, I gave Orb another try, and while the web interface failed to work for me, the iPhone application (I tried the free one at first) actually worked flawlessly. And not only that, it also manages to convert the streams on the fly, so you don't have to wait for the transcoding process to finish before playback starts. On my 2.26 GHz Mac Mini Server this worked even with 1080p material. For Windows 2008 Server users out there: remember to install the Desktop Experience feature in the Server Manager if you intend this to work. Of all the stuff I had a look at, this really provided instant success, even though I'm now probably sending the contents of my hard drive to Orb's central server (sigh).
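
    None of this is from the original post, but as an illustration of what "transcoding on the fly" entails, here is a minimal ffmpeg invocation producing an iPhone-friendly HLS stream (input name and settings are placeholders, a sketch only):

        # Transcode to H.264/AAC and segment for HTTP Live Streaming
        ffmpeg -i input.mkv \
               -c:v libx264 -preset veryfast -c:a aac \
               -f hls -hls_time 10 -hls_list_size 0 \
               stream.m3u8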

    Read the article

  • nginx doesn't find the directory but apache does

    - by Jack Spairow
    I use Apache as the backend server and nginx on the frontend. Apache listens on port 8080 and nginx on port 80. What I do is have the root point to the public folder for each virtualhost:

        <VirtualHost *:8080>
            ServerAdmin webmaster@localhost
            ServerName site.com
            ServerAlias site.com *.site.com
            DocumentRoot /var/www/site.com/public
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/site.com/public/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    And here's the nginx config:

        server {
            listen 80;
            access_log /var/log/nginx.access.log;
            error_log /var/log/nginx.error.log;
            root /var/www/site.com/public;
            index index.php index.html;
            server_name site.com *.site.com;
            location / {
                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $host;
                    proxy_pass http://127.0.0.1:8080;
                    proxy_cache one;
                    proxy_cache_use_stale error timeout invalid_header updating;
                    proxy_cache_key $scheme$host$request_uri;
                    proxy_cache_valid 200 301 302 20m;
                    proxy_cache_valid 404 1m;
                    proxy_cache_valid any 15m;
                }
            }
            location ~ /\.(ht|git) {
                deny all;
            }
        }

    The problem is that Apache resolves the domain just fine (site.com:8080), but nginx shows a 502 Bad Gateway instead (site.com:80). I tried looking at the error_log and access_log, but I can't find any hint of why nginx can't work. EDIT: The problem was that I wasn't able to include that isolated config for nginx.
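
    A quick way to separate the two layers when chasing a 502 like this (a generic diagnostic sketch, not from the post): talk to the backend exactly as nginx would, then watch nginx's own log while repeating the front-door request:

        # Ask Apache directly, with the Host header nginx would forward
        curl -v -H 'Host: site.com' http://127.0.0.1:8080/
        # Verify the config nginx actually loaded parses cleanly
        nginx -t
        # Then watch the error log while requesting site.com:80 again
        tail -f /var/log/nginx.error.log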

    Read the article

  • The Story of secure user-authentication in squid

    - by Isaac
    once upon a time, there was a beautiful warm virtual jungle in South America, and a squid server lived there. Here is a conceptual image of the network:

                      <the Internet>
                            |
                            |
                   A        |        B
        Users <---------> [squid-Server] <---> [LDAP-Server]

    When the Users request access to the Internet, squid asks for their name and passport, authenticates them against LDAP and, if LDAP approves them, grants them access. Everyone was happy until some sniffers stole passports on the path between the users and squid [path A]. This disaster happened because squid used the Basic-Authentication method. The people of the jungle gathered to solve the problem. Some bunnies offered the NTLM method. Snakes preferred Digest-Authentication, while Kerberos was recommended by the trees. After all, many solutions were offered by the people of the jungle, and everyone was confused! The Lion decided to end the situation. He shouted the rules for solutions:

        1. Shall the solution be secure!
        2. Shall the solution work for most browsers and software (e.g. download software)
        3. Shall the solution be simple and not need another huge subsystem (like a Samba server)
        4. Shall the method not depend on a special domain (e.g. Active Directory)

    Then a very reasonable, comprehensive, clever solution was offered by a monkey, making him the new king of the jungle! Can you guess what the solution was? Tip: The path between squid and LDAP is protected by the lion, so the solution does not have to secure it. Note: sorry if the story is boring and messy, but most of it is real! =)

        (ASCII-art portrait omitted)

    Thanks!
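
    (The riddle's constraints - secure, broadly supported, no Samba, no AD - point toward Digest authentication. Purely as an illustration, and only a guess at the monkey's answer: a minimal squid.conf sketch using the stock digest helper with a local password file; an LDAP-backed digest helper would match the story more closely, but its flags vary by build.)

        # helper name varies by version: digest_pw_auth in Squid 2.x,
        # digest_file_auth in 3.x
        auth_param digest program /usr/lib/squid/digest_pw_auth /etc/squid/digest_passwd
        auth_param digest children 5
        auth_param digest realm jungle-proxy
        acl authenticated proxy_auth REQUIRED
        http_access allow authenticated
        http_access deny all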

    Read the article

  • How can I monitor VNC via Nagios?

    - by atroon
    I have a number of remote sites which have VNC running on a few computers for support purposes. They are (obviously) only available on our internal network. I am using Nagios to keep track of all the systems in the network and I want to have it check to make sure the VNC server is running on the appropriate hosts. There is a 'check_vnc' plugin available here but it relies on VNC Snapshot which I don't want to use. Certainly I could use it, but it adds more complexity and dependency, which I want to avoid. It seems simpler to just use check_tcp to make sure I get the proper response to a connection request for VNC, e.g. port 5900, send a connect string, get back framebuffer info. My real question, I suppose, is this: What is the 'proper' generic connect string for VNC (I use both UltraVNC and RealVNC) and what is the expected response? If it's really easier to use the VNC Snapshot and check_vnc, let me know. I just can't imagine that a string of text isn't easier, faster, and less bandwidth intensive to monitor.
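
    For what it's worth, the RFB protocol (which both RealVNC and UltraVNC speak) opens with the server sending its version banner, e.g. RFB 003.008, before any client data. That makes a banner check sufficient for monitoring; a Nagios sketch along those lines (paths and macros as in a stock install):

        # check that something answers on 5900 and greets with an RFB banner
        define command {
            command_name  check_vnc_banner
            command_line  $USER1$/check_tcp -H $HOSTADDRESS$ -p 5900 -e "RFB"
        }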

    Read the article

  • Building vs buying a server for an academic lab [closed]

    - by Roy
    I'm looking for advice on the classic build-vs-buy question. We need a new Linux server to run Matlab computations on in our lab (academic). The Matlab Parallel Computing Toolbox licence allows up to 12 local workers, so we are aiming at a 12-core server with 4GB of memory per core (48GB total). The system will have an SSD for the OS and a RAID-5 array (4x2TB) for data. I looked around and found a (relatively) cheap vendor, Silicon Mechanics, that offers a system to our liking (specs below) for $6732. However, buying the components from Newegg costs only $4464! The difference is $2268, which is 50% of the base cost. If buying from a company can be thought of as a sort of insurance, my premiums are basically 50% of the base cost, which to me sounds like a lot. Of course any downtime is bad, but the work is not "mission critical", i.e. if it takes a few days to fix when it breaks, it's not the end of the world. If it takes weeks to months, then it's a problem. If it breaks 2-3 times in 3 years, not too bad. If it breaks every month, not good. In terms of build experience, I set up a Linux cluster in grad school (from existing computers) and I build my home PCs, but I have never built a server before. The server components I'm thinking about:

        1 x SUPERMICRO SYS-7046T-6F 4U Tower Server Barebone, Dual LGA 1366, Intel 5520, DDR3 1333/1066/800 ($1,050)
        12 x Kingston 4GB 240-Pin DDR3 1333 (PC3 10600) ECC Unbuffered Server Memory ($420)
        2 x Intel Xeon E5645 Westmere-EP 2.4GHz LGA 1366 80W Six-Core ($1,116)
        4 x Seagate Constellation ES 2TB 7200 RPM SATA 6.0Gb/s 3.5" ($1,040)
        1 x SAMSUNG Internal DVD Writer, Black, SATA ($20)
        1 x Intel 520 Series 2.5" 180GB SATA III MLC SSD ($300)
        1 x LSI LSI00281 PCI-Express 2.0 x8 MD2 Low-profile SATA/SAS MegaRAID SAS 9260CV-4i Controller Card ($695)

    Read the article

  • Change default profile directory per group

    - by Joel Coel
    Is it possible to force Windows to create profiles for members of one Active Directory group in a different folder from members of another Active Directory group? The school here uses DeepFreeze to protect public computers. In a nutshell, DeepFreeze prevents all changes to a hard drive, such that every time you restart the machine the disk is identical to what it was at the time you froze it. This is a bit different from restoring to an image, in that changes are never really written to disk in a permanent way in the first place. This has a few advantages over images: faster recovery times, and it's easy to thaw the machine for a few minutes to perform maintenance such as Windows updates (which can even be automated). DeepFreeze also allows you to configure a "thawspace" partition, where changes are persistent across reboots. One of the weaknesses of DeepFreeze is that you end up needing to create a new profile every time you log in, unless your profile existed at the time the machine was frozen. And even then, any changes you make to your profile while working on a frozen machine are lost. As students have frequent legitimate needs to log in to our classroom machines, there is currently a lot of cleanup involved from time to time in removing their old profiles and changes, so I want to extend DeepFreeze to protect our classroom computers as well as public computers. The problem is that faculty have a real need to keep a stateful profile locally on these classroom computers. The solution I would like to use is to configure Windows via group policy (or even manually, if that's the way I'll have to do it) to place profile folders on the thawspace partition, but only for members of the faculty security group. Is this possible?
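
    Windows has no built-in "profile root per security group" setting that I know of; the closest per-user equivalent would be stamping a profile path onto each faculty account. A sketch with hypothetical names (roaming-profile semantics differ from purely local profiles, so treat this as a starting point rather than the asked-for solution):

        # For every member of the Faculty group, point the profile at a
        # share (or a path on the thawspace drive) - names are placeholders
        Import-Module ActiveDirectory
        Get-ADGroupMember -Identity 'Faculty' |
            ForEach-Object {
                Set-ADUser -Identity $_.SamAccountName `
                    -ProfilePath "\\fileserver\profiles$\$($_.SamAccountName)"
            }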

    Read the article

  • Dell PowerEdge T710, add a new hard disk, how to?

    - by user1340802
    I need to add a new hard disk to a PowerEdge T710 running VMware ESXi 4. The hard disk is a "normal" 1TB desktop hard disk (that is, it is not from Dell; I also have no caddy for it, to plug it into any of the front bays). I would like to add this disk, for a virtual machine needing space, as easily as possible. I have found that there is an available SATA cable with its power connector, so may I just add the disk by plugging these in and using the empty 5.25" slot available under the CD drive (with a 5.25"-to-3.5" bay adapter)? (Even though this way it seems that I bypass the RAID controller that owns the front bays.) That seems easier than adding the disk to the already-defined RAID (by the way, I am also not sure how to do that, and I would not risk messing up things that already work). What other operations would I have to do? (Sorry, I am a real beginner at VMware ESXi and PowerEdge management :/ I have seen that there is some management from the BIOS (Ctrl+R at start-up) so that the disk will be seen, or to initialize it. I am really not sure of the steps needed...) Thank you, best.

    Read the article

  • Ubuntu preseed installation keeps missing mirror files

    - by JackWu
    I install Ubuntu 12.04.2 with a preseed file, but there is one buggy problem with the preseed mirror setting. The symptom is that the installation process gets stuck. So I tracked down the log file and found the real problem: the installation is looking for a file that's not there. This is just one of them; another pops up if I fake this file. This all happens during preseed, so I believe preseed has something to do with it. I googled ubuntu preseed mirror and found this post, saying:

        # If you select ftp, the mirror/country string does not need to be set.
        #d-i mirror/protocol string ftp
        d-i mirror/country string manual
        d-i mirror/http/hostname string archive.ubuntu.com
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string
        # Alternatively: by default, the installer uses CC.archive.ubuntu.com where
        # CC is the ISO-3166-2 code for the selected country. You can preseed this
        # so that it does so without asking.
        #d-i mirror/http/mirror select CC.archive.ubuntu.com
        # Suite to install.
        #d-i mirror/suite string lucid
        # Suite to use for loading installer components (optional).
        #d-i mirror/udeb/suite string lucid
        # Components to use for loading installer components (optional).
        #d-i mirror/udeb/components multiselect main, restricted

    I wonder about the difference between d-i mirror/http/hostname and d-i mirror/http/mirror; I mean, they both specify a mirror, right? In my preseed file there is no d-i mirror/http/mirror, and d-i mirror/http/hostname points to my own repo, as the (omitted) screenshot showed. Here are my questions: Does preseed fetch files/resources from the internet if I use a local repo? Why is it looking for a file that isn't even there? This has bothered me for quite some time; many thanks in advance to anyone who can help.
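
    As far as I can tell (an interpretation, not from the post): mirror/http/hostname is the free-form mirror host used when mirror/country is set to manual, while mirror/http/mirror is the entry the installer picks from its built-in country-based mirror list; only one of the two mechanisms applies. A local-repo sketch with a hypothetical hostname:

        d-i mirror/country string manual
        d-i mirror/http/hostname string repo.example.lan
        d-i mirror/http/directory string /ubuntu
        d-i mirror/http/proxy string
        # the local repo must carry the dists/<suite>/... index files the
        # installer requests, or it will stall looking for missing files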

    Read the article

  • Configure Nginx to render static files and rewrite file extension or proxy_pass

    - by Pardoner
    I've set up Nginx to handle all my static files and otherwise proxy_pass to a Node.js server. It's working fine, but I'm having difficulty rewriting the URL so that it removes the .html file extension.

        upstream my_upstream {
            server 127.0.0.1:8000;
            keepalive 64;
        }
        server {
            listen 80;
            server_name staging.mysite.com;
            root /var/www/staging.mysite.org/public;
            access_log /var/logs/staging.mysite.org.access.log;
            error_log /var/logs/staging.mysite.org.error.log;
            location ~ ^/(images/|javascript/|css/|robots.txt|humans.txt|favicon.ico) {
                rewrite (.*)\.html $1 permanent;
                try_files $uri.html $uri/ /index.html;
                access_log off;
                expires max;
            }
            location / {
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_set_header Connection "";
                proxy_http_version 1.1;
                proxy_cache one;
                proxy_cache_key sfs$request_uri$scheme;
                proxy_pass http://my_upstream;
            }
        }
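
    A pattern that usually handles extensionless URLs (a sketch against the config above, untested here): redirect explicit .html requests to the clean URL, then let try_files map clean URLs back onto the .html files before falling through to the proxy:

        location / {
            # /about.html -> 301 -> /about
            rewrite ^(/.+)\.html$ $1 permanent;
            # /about -> serve /about.html if it exists, else proxy
            try_files $uri $uri.html $uri/ @node;
        }
        location @node {
            proxy_pass http://my_upstream;
        }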

    Read the article

  • Explanation of RAM specs, and what do I need for a gaming rig

    - by ewok
    I am looking into upgrading my custom-built PC's RAM. I use the machine mostly for gaming, but I don't really know a ton about RAM, so I wanted to ask a few questions. The research I've done tells me there is a negligible increase in speed for anything above 1600 MHz. Is this true, or is it worth the extra money to go higher? Other than drawing more power from the PSU, is there any real difference in performance with different voltages (1.5V vs 1.65V)? Most of the kits I've found in the 2x4GB 1600 range have a CAS latency of 9 and timings of 9-9-9-24. For a significant increase in price (usually about 1.5x), I can get either 8 or 7 and lower timings. Is it worth the cost? What I am looking for here is a good explanation of what the different specs represent and how they relate to the performance of the machine. Specifically, I'm looking for which specs I need to focus on for a good gaming rig. I am NOT looking for a "buy this, it's the best RAM" answer without an explanation of why. The information will be much more valuable, as it will allow me to make my own informed decision. As they say: give a man a fish, he'll eat for a day; teach a man to fish, and he'll eat for the rest of his life.
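
    For a sense of scale (a worked example, not from the question): the actual access latency in nanoseconds is roughly CL x 2000 / transfer rate (MT/s), so CAS numbers only mean something relative to the clock:

        DDR3-1600 CL9:   9 x 2000 / 1600 = 11.25 ns
        DDR3-1600 CL7:   7 x 2000 / 1600 =  8.75 ns
        DDR3-2133 CL11: 11 x 2000 / 2133 = 10.3  ns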

    Read the article

  • Imagemagick convert creates monochrome output only

    - by rumtscho
    I have a book scan as a PDF. When I open it with Adobe Reader, it looks like grayscale. When I open it with IrfanView, it also looks like grayscale, and the Information option tells me that the image is actually 24-bit (I don't know if this is the real bit depth of the image embedded in the PDF, or if IrfanView assigns the maximal depth when opening a PDF as an image). I want to OCR the scan with OmniPage SE. It doesn't read PDF, so I decided to use ImageMagick to convert the file to PNG first. But no matter what I try, the output is always monochrome and practically unreadable. I tried different conversion lines, with different depth, density and resize values, but it didn't help. The output shown (image omitted) was made with the options convert testfile.pdf -density 600x600 -depth 8 PNG:testfile.png. Any idea what causes the problem? Edit: To make it clear, the output looks like this for any value of -density, -depth and -resize I have tried. It also looks like that when I use no options at all, as in convert testfile.pdf PNG:testfile.png.
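
    One detail worth testing (a suggestion based on how ImageMagick handles PDFs, not a confirmed diagnosis): convert hands PDF rasterization off to Ghostscript, and settings like -density only influence that step if they appear before the input file. A sketch:

        # -density must precede the PDF for Ghostscript to honor it;
        # force a grayscale PNG explicitly
        convert -density 300 testfile.pdf -depth 8 -colorspace Gray testfile.png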

    Read the article

  • Juniper NetScreen NS-5GT traffic monitoring

    - by blah
    I've done casual research into the subject and am truly dismayed at the lack of compatible tools for such a simple task. Maybe someone can provide assistance. We have a NetScreen NS-5GT in the office. I need to be able to get a glance of current traffic per endpoint -- I think the equivalent of 'get sessions' with byte counts/rates. I don't care about bars, graphs, and reports. Something as simple as a classic software firewall display would be perfect. I can't shell out money on something real like SolarWinds products, so a free solution is essential. I'm willing to do a little work but refuse to program something from scratch. It's not prudent right now for me to install a hub or otherwise mess around physically. There must be something out there I can use, maybe in combination. I don't believe I'm asking too much. Specific answers only please, e.g. monitoring software you know will actually work with this antiquated device. I've read about general approaches to the broader problem dozens of times already.
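
    Not mentioned above, but possibly useful: ScreenOS devices like the NS-5GT can expose interface counters over SNMP, which any free poller can read. A minimal probe with net-snmp (community string and address are placeholders, and SNMP must be enabled on the device):

        # per-interface byte counters from the standard interfaces MIB
        snmpwalk -v2c -c public 192.0.2.1 IF-MIB::ifInOctets
        snmpwalk -v2c -c public 192.0.2.1 IF-MIB::ifOutOctets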

    Read the article

  • Family server setup [closed]

    - by Manny
    Hi all, I really hope some of you can give me some direction. I have set up a Linux server at home, and through Samba I can access files from different computers in my home. I would like to use this server as a file server for my family (brothers, sisters and parents, who all live in their own homes). I really like the way it is set up right now, with user and permission controls, but I've read that it is a bad idea to open up the Samba port to the world. The requirements are simple: 1) it should be easy to access, using standard web browsers or by mounting the drive (we shouldn't have to use any VPN setup or use PuTTY etc.); 2) it should be somewhat secure. We just want to share family pictures instead of putting them on Facebook or Picasa or another web service; nothing top secret. Here is what I've looked into: 1) WebDAV. It seems decent, but it seems like Windows 7 doesn't like it very much, even with digest-mode authentication. User controls and permissions are not as flexible as Samba's (or at least to my knowledge). I really like the user and group permissions in Samba, but I could live with WebDAV if it worked seamlessly with Windows; it should just work, shouldn't it? 2) I read somewhere to stay away from FTP, as it is outdated and there are newer and better internet file-server setups. Was that a reference to WebDAV? I am so confused, please help... Manny
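
    Purely as an illustration of the WebDAV route (hypothetical paths and realm, Apache 2.2-style directives; one known quirk is that the Windows 7 WebDAV client wants HTTPS before it will send Basic credentials, so this assumes an SSL-enabled vhost):

        # Apache mod_dav share with per-user digest auth
        <Location /family>
            DAV On
            AuthType Digest
            AuthName "family"
            AuthUserFile /etc/apache2/webdav.digest
            Require valid-user
        </Location>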

    Read the article

  • Limiting bandwidth on a Windows 7 machine

    - by Mihai Damian
    I need to limit the bandwidth on my Windows 7 x64 machine. In the past (on XP) I was able to use NetLimiter for similar tasks, but for some reason I can't get it to work anymore. For lower limits, the bandwidth tests are able to exceed the limit by 10-50%; higher limits seem to be ignored completely, and the bandwidth tests report download speeds of over 10 times the limit I set. I'm using speedtest.net and a similar service from my ISP for these tests. Anyway, I don't necessarily need a program as complex as NetLimiter, since I only need to throttle my machine's bandwidth, not a specific program's. In case you are wondering why in the world I'd want to cripple my Internet speed, there is a funny story behind this. Long story short: my modem gets random disconnects. Tech support comes in, says my Internet speed is abnormally high and that I must be using some tools to somehow make it go faster than it's supposed to, and that this messes up my modem. I check the connection with another computer, and it seems that my PC is the only one in my network that gets abnormal speeds. I reinstall my OS; the speed looks normal at first, but after I install the batch of 50 or so updates, it goes back to abnormally high speeds, and the disconnect problems are not solved. Now I don't have a clue whether the explanation the tech team gave me was just a strategy to lay the blame on someone else, but I was trying to give them the benefit of the doubt and see what happens if I really reduce my speed to their specification. Any help appreciated.

    Read the article

  • How to minimize the risk of employees spreading critical information?

    - by Industrial
    Hi everyone, what's common sense when it comes to minimising the risk of employees spreading critical information to rival companies? As of today, it's clear that not even the US government and military can be sure that their data stays safely within their doors. Thereby I understand that my question should probably instead be written as "What is common sense to make it harder for employees to spread business-critical information?" If anyone wants to spread information, they will find a way; that's the way life works and always has. If we make the scenario a bit more realistic by narrowing our workforce, by assuming we only have regular John Does on board and not Linux-loving sysadmins, what would be good precautions to at least make it harder for employees to send business-critical information to the competition? As far as I can tell, there are a few obvious solutions that clearly have both pros and cons:

        - Block services such as Dropbox and similar, preventing anyone from sending gigabytes of data through the wire.
        - Ensure that only files below a set size can be sent as email (?)
        - Set up VLANs between departments to make it harder for kleptomaniacs and curious people to snoop around.
        - Plug all removable media units - CD/DVD, floppy drives and USB.
        - Make sure that no configuration changes to hardware can be made (?)
        - Monitor network traffic for non-linear events (how?)

    What is realistic to do in the real world? How do big companies handle this? Sure, we can take the former employee to court and sue, but by then the damage has already been caused... Thanks a lot

    Read the article

  • Changing the default boot option without losing the boot menu

    - by hvd
    I've had a working multi-boot setup with the Windows boot loader, containing menu items for two Windows 7 systems and one for Grub. Grub in turn contains multiple menu items, but I think that's not relevant here. I've upgraded one system to Windows 8. When I now set a different system as the default, I lose the boot menu, and with it the possibility of booting into the other systems. I've set Windows 7 as the default and rebooted, and I get Windows 7, but I don't get to choose which system to boot into. I can run its own bcdedit to change the default back to Windows 8, and another reboot shows the boot menu again, but how can I avoid defaulting to Windows 8? Here are my current boot settings; is there anything misconfigured?

        C:\WINDOWS\system32>bcdedit

        Windows Boot Manager
        --------------------
        identifier              {bootmgr}
        device                  partition=F:
        description             Windows Boot Manager
        locale                  nl-NL
        inherit                 {globalsettings}
        integrityservices       Enable
        default                 {current}
        resumeobject            {2f8b77f0-a30b-11e1-a9c6-a4bd8d37f662}
        displayorder            {current}
                                {2f8b77e3-a30b-11e1-a9c6-a4bd8d37f662}
                                {2f8b77ee-a30b-11e1-a9c6-a4bd8d37f662}
        toolsdisplayorder       {memdiag}
        timeout                 30

        Windows Boot Loader
        -------------------
        identifier              {current}
        device                  partition=C:
        path                    \WINDOWS\system32\winload.exe
        description             Windows 8
        locale                  nl-NL
        inherit                 {bootloadersettings}
        integrityservices       Enable
        recoveryenabled         No
        allowedinmemorysettings 0x15000075
        osdevice                partition=C:
        systemroot              \WINDOWS
        resumeobject            {2f8b77f0-a30b-11e1-a9c6-a4bd8d37f662}
        nx                      OptIn
        bootmenupolicy          Standard

        Windows Boot Loader
        -------------------
        identifier              {2f8b77e3-a30b-11e1-a9c6-a4bd8d37f662}
        device                  partition=D:
        path                    \Windows\system32\winload.exe
        description             Windows 7
        locale                  nl-NL
        osdevice                partition=D:
        systemroot              \Windows
        resumeobject            {59616f59-a2ba-11e1-b73a-806e6f6e6963}
        nx                      OptIn
        pae                     Default
        bootmenupolicy          Standard
        hypervisorlaunchtype    Auto
        detecthal               Yes
        sos                     No
        debug                   No

        Real-mode Boot Sector
        ---------------------
        identifier              {2f8b77ee-a30b-11e1-a9c6-a4bd8d37f662}
        device                  partition=C:
        path                    \grub\winloader\grub.boot
        description             Grub 2
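
    A hunch worth testing (not confirmed against this machine): the Windows 8 boot manager only draws its graphical menu when the default entry is the Windows 8 loader; with bootmenupolicy Standard and a non-Win8 default, the menu gets skipped. Forcing the legacy text menu should keep the chooser regardless of the default. A sketch, with the Windows 7 GUID taken from the listing above:

        C:\> bcdedit /set {bootmgr} displaybootmenu yes
        C:\> bcdedit /set {current} bootmenupolicy Legacy
        C:\> bcdedit /default {2f8b77e3-a30b-11e1-a9c6-a4bd8d37f662}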

    Read the article

  • Linux boot - stop the kernel switching to a new framebuffer mode clearing output

    - by Avio
    I'm working on an embedded system (based on Ubuntu 12.04 LTS) and I'm customizing its kernel. I'm having some problems with upstart, mountall and plymouth. Nothing unsolvable, I suppose, but the real problem is that I can't diagnose properly what's going on, because the kernel (or maybe plymouth) changes the video mode in the middle of the boot process. This completely wipes entire lines of log output and prevents any debugging of kernel misconfigurations. My Grub2 config seems to be OK, with:

        GRUB_CMDLINE_LINUX=""
        GRUB_CMDLINE_LINUX_DEFAULT="acpi=force noplymouth"
        GRUB_GFXMODE=1024x768x32
        GRUB_GFXPAYLOAD_LINUX=keep

    Here is some relevant output of lspci:

        00:00.0 Host bridge: Intel Corporation Mobile 945GSE Express Memory Controller Hub (rev 03)
        00:02.0 VGA compatible controller: Intel Corporation Mobile 945GSE Express Integrated Graphics Controller (rev 03)
        00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)

    And here is the relevant portion of my kernel configuration:

        CONFIG_AGP=y
        CONFIG_AGP_INTEL=y
        CONFIG_VGA_ARB=y
        CONFIG_VGA_ARB_MAX_GPUS=16
        CONFIG_DRM=y
        CONFIG_DRM_KMS_HELPER=y
        CONFIG_DRM_I915=y
        CONFIG_DRM_I915_KMS=y
        CONFIG_VIDEO_OUTPUT_CONTROL=y
        CONFIG_FB=y
        CONFIG_FB_BOOT_VESA_SUPPORT=y
        CONFIG_FB_CFB_FILLRECT=y
        CONFIG_FB_CFB_COPYAREA=y
        CONFIG_FB_CFB_IMAGEBLIT=y
        CONFIG_FB_MODE_HELPERS=y
        CONFIG_FB_VESA=y
        CONFIG_BACKLIGHT_LCD_SUPPORT=y
        CONFIG_BACKLIGHT_CLASS_DEVICE=y
        CONFIG_VGA_CONSOLE=y
        CONFIG_VGACON_SOFT_SCROLLBACK=y
        CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=640
        CONFIG_DUMMY_CONSOLE=y
        CONFIG_FRAMEBUFFER_CONSOLE=y
        CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
        CONFIG_FONT_8x8=y
        CONFIG_FONT_8x16=y
        CONFIG_LOGO=y
        CONFIG_LOGO_LINUX_MONO=y
        CONFIG_LOGO_LINUX_VGA16=y
        CONFIG_LOGO_LINUX_CLUT224=y

    Every other custom/stock kernel boots fine with that Grub2 config. What I would like to have is a single flow of messages on a single console (retaining one screen resolution) from the boot-up logo till the login prompt. Does anybody know what I have to tweak to achieve this?
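
    Given the i915/KMS options in the config above, the mid-boot mode switch is plausibly the i915 driver taking over from the VESA framebuffer. As a diagnostic (a sketch, not a guaranteed fix), disabling kernel mode setting keeps the boot console in one mode so the log lines survive:

        # in /etc/default/grub, then run update-grub
        GRUB_CMDLINE_LINUX_DEFAULT="acpi=force noplymouth nomodeset"
        # or, to disable KMS for the Intel driver only:
        # GRUB_CMDLINE_LINUX_DEFAULT="acpi=force noplymouth i915.modeset=0"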

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community, after researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list of potential candidates, based mainly on maturity, license and support:

        - Ceph
        - Lustre
        - GlusterFS
        - HDFS
        - FhGFS
        - MooseFS
        - XtreemFS

    The key properties that our system should exhibit:

        - open source and liberally licensed, yet production-ready, i.e. a mature, reliable, community- and commercially-supported solution;
        - ability to run on commodity hardware, preferably designed for it;
        - high availability of the data, with the most focus on reads;
        - high scalability, so operation over multiple data centres, possibly on a global scale;
        - removal of single points of failure with the use of replication and distribution of (meta-)data, i.e. fault tolerance.

    The sensitivity points that were identified, and resulted in the following questions, are:

        - Transparency to the processing layer/application with respect to data locality, i.e. knowing where data is physically located at the server level, mainly for resource allocation and fast processing. How can this high performance be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
        - POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. How relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX-compliant by design; what are the pros and cons?
        - What about the difference between synchronous and asynchronous operation of a distributed file system? Though a synchronous distributed file system is preferable for reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go?

    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

    Read the article
