Search Results

Search found 18926 results on 758 pages for 'systems programming'.


  • Struggling with the proper way to set up permissions on a Linux/Apache web server

    - by Dr. DOT
    Your expert experience and assistance is greatly appreciated here. I have been running a LAMP server for a long time, yet I still struggle with the best way to set file and directory permissions for FTP and WWW activity. My control panel is WHM/cPanel (not that it makes a difference), and out of the box:
      - files are owned by the user account set up in WHM (e.g. "abc")
      - files have a group of "abc" as well
      - files are created with permissions 644
      - directories are owned by "abc" with a group of "abc"
      - directories are created with permissions 0755
    Again, these are the defaults. Everything is fine for FTP activity, but please tell me if any of these settings create issues, especially with security. Here is where my struggle comes in: I have PHP apps that let a visitor create, edit, rename and delete sub-directories and files in certain selected directories, and PHP runs as "nobody" on my server. So to get the PHP/web apps to work I have had to run chown nobody *, chgrp nobody * and chmod 0777 * on everything in those selected sub-directories. I know this is probably a huge security hole (so don't ask me for any links :) but how should I set the permissions so that my FTP user can do his thing, the PHP apps can do theirs, and security risks and exposure are minimized? I know the big CMS systems like Drupal, Joomla and WordPress handle this somehow. Thanks ahead of time for reading through this and offering your expert advice!
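
    A minimal sketch of the group-based alternative, assuming the FTP account is "abc", PHP runs as "nobody", and the visitor-writable tree lives under /home/abc/public_html/uploads (path and group name are hypothetical; whether cPanel/suPHP tolerates this is an assumption):

        # Put the FTP user and the web server user in a shared group
        groupadd webedit
        usermod -a -G webedit abc
        usermod -a -G webedit nobody

        # Hand the writable tree to abc:webedit instead of nobody:nobody
        chown -R abc:webedit /home/abc/public_html/uploads

        # Group-writable; setgid on directories so new files inherit the group
        find /home/abc/public_html/uploads -type d -exec chmod 2775 {} \;
        find /home/abc/public_html/uploads -type f -exec chmod 664 {} \;

    This keeps 0777 off the tree entirely; the cleaner fix is running PHP as the account owner (suPHP/suEXEC), which cPanel can usually be switched to.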

    Read the article

  • Desktop Provisioning for a Small Linux Software Development Team

    - by deakblue
    Goal: get a small team using a standard development image rather than four software devs setting up their own environments. Why: it takes a day or more to install a distro, build-specific libraries, tools like editors and IDEs, mysql, couchdb, java, maven, python, the android-sdk, etc. It's a giant PITA that, when repeated by four developers (not sysadmins), wastes time and generates annoying divergences that crop up later (it-builds-on-my-box syndrome). There's no sharing of productivity, settings, tricks, scripts or set-ups. Some of this is helped by segregating the build systems into headless VirtualBox images, but that doesn't really address tooling or the GUI-desktop development that needs doing. So I see three basic strategies: ghosting, virtualization, and creating a kind of in-house Linux distro (I guess Google does something like this). The target dev environment is based on Debian and OpenBox, and must work on a mix of 3rd-gen Core i7 notebooks with 8 GB minimum, both single- and multi-head. Importantly, the laptops are not identical: a mix of 2012 MacBooks and PCs. So: virtualization - is doing all of your work within a VM, like VirtualBox, practical on this hardware, or just annoying? Ghosting - will laptops from different manufacturers make this impractical? DIY distro - short of scripting a bunch of package installs (sketched below), I don't know of any "distro-maker" that would keep this from becoming an epic scripting project. Any advice?
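
    For the "scripting a bunch of package installs" route mentioned above, a minimal sketch of the kind of one-shot script I have in mind, assuming a Debian/OpenBox base; the package names and the dotfiles URL are illustrative and would need checking against the real archive and repo:

        #!/bin/sh
        # provision-dev.sh -- rough one-shot setup for a new dev laptop (sketch only)
        set -e

        sudo apt-get update
        # Core tooling; exact package names assumed, verify before use
        sudo apt-get install -y build-essential git subversion maven \
            openjdk-7-jdk python mysql-server couchdb vim

        # Shared editor settings / scripts pulled from an internal repo (hypothetical URL)
        git clone git://repo.internal.example/devteam/dotfiles.git "$HOME/dotfiles"
        sh "$HOME/dotfiles/install.sh"

    Anything heavier (the android-sdk, IDE tarballs) can be appended the same way; the point is that the script, not a wiki page, becomes the documentation.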

    Read the article

  • USB hard disk not working on dual-boot Windows 7/8

    - by Jesper
    Yesterday I installed Windows 8 on a machine that already had Windows 7. They dual boot and both systems work fine. The problem is that inserting a USB hard disk in either system does nothing. If I connect a USB mouse or a mobile phone they work fine, so the USB ports are active, and the USB hard drives I am trying to connect work fine on my other laptop. I have tried uninstalling all USB-related items in Device Manager and letting them reinstall on restart, but that didn't help. The USB drive does not show up in Disk Management either. The strange thing is that it is exactly the same situation in both Windows installs: USB mice etc. work and USB hard drives do not. Any ideas for solving this would be great. I don't know if it matters, but this is a Toshiba Tecra R950 laptop. EDIT: I have found that my other USB HD (a Western Digital) works on this laptop, but my StoreJet Transcend and an Adata drive do not. All three work on another Windows 7 laptop. Size-wise the WD is in the middle at 400 GB; the StoreJet is 640 GB and the Adata is 200 GB.

    Read the article

  • Is it possible for a faulty processor to cause audio static/noise?

    - by Tom
    I have a Core 2 Extreme processor I received from a friend and have set up an XBMC box using it. However, I constantly get audio static when playing any music or video. Here is a video of the sound: http://www.youtube.com/watch?v=SqKQkxYRVA4 I have tried replacing everything short of the case and the processor, including cables, audio interfaces, operating systems, RAM, etc., leading me to think it might be either the case shorting out the motherboards I have tried, or a faulty processor. Is it possible for a faulty processor to cause audio static/noise? Any feedback would be appreciated. Edit - here's a list of things I have tried:
      - Reinstalling the OS
      - Installing/upgrading/repairing PulseAudio/ALSA
      - Installing alternate OSes: stock Ubuntu, Lubuntu, Xubuntu, Arch, Mint, Windows 7
      - Switching audio from the external card to internal
      - Optical out, audio out through HDMI, audio out through headphones
      - Different ports on the receiver (my main desktop sounds fine on the same sound system)
      - Different optical cables
      - Unplugging everything unnecessary from the motherboard (1 HD, 1 stick of RAM, 1 keyboard)
      - Swapping out RAM
      - Swapping out the motherboard
      - Replacing the graphics card (replaced because the fan was noisy, not specifically for this problem)
      - Different hard drives
      - Swapping the power supply
      - Disabling onboard audio
      - Switching the power cable
      - Plugging in through a surge protector
      - Plugging into a different outlet on a separate circuit

    Read the article

  • How to encourage Windows administrators to pick up scripting?

    - by icelava
    When I worked as an administrator in my first job, I was frustrated that our administration processes on Windows servers were a series of point-and-clicks; we could never match the efficiency of the Unix servers, which had a set of shell scripts automating much of the work. I soon read about WSH and ADSI and wasted no time learning just how much automation I could achieve with scripting. There was a huge problem though - almost none of my Windows colleagues were interested in learning scripting. They seemed happy with the manual mouse-clicking chores and were never excited at the prospect of using scripts to do the work for them. I struggled to convince them to pick up scripting skills despite the evident gains in efficiency, and I left that job to pursue a full-time software development career. Almost a decade on, working in various environments and with different customers, I still encounter Windows administrators in this general mood of avoiding scripting as much as possible, despite Windows server technologies becoming ever more accessible to scripting and automation. I am almost certain the majority of administrators are administrators precisely because they absolutely hate performing any kind of programming duty. What are some ways to encourage and motivate administrators to see that scripting can really help them in the long run?

    Read the article

  • Legalities of freelance security consultant (SQLi) [closed]

    - by Seidr
    Over the years I've gained a large amount of experience in programming (my main occupation) and server admin, and as a result have a fairly decent grounding in security practices. I'm also pretty good at spotting security flaws in software (including but not limited to SQLi), and have built up a list of sites that could definitely use some looking at. My question is: what are the legalities of contacting these sites and saying something along the lines of "I've looked at your site and it appears vulnerable - customer data could be compromised - would you like me to fix it?" Could my finding out that the site is in fact vulnerable be construed as an attack in itself? If the prospective client so wished, could they take me to court over it? When I find a vulnerable site, all I do is confirm and make a note of the vulnerability. I'm not in it for personal gain (though getting paid for fixing it would be nice!), just curiosity. Is this a viable way to find clients for this kind of work, or would you recommend a more 'legitimate' approach? Any suggestions/advice would be greatly appreciated :)

    Read the article

  • Kindly guide me to buy a new laptop [on hold]

    - by Its me 007
    I am from India and I want to buy a new laptop. I have shortlisted a few, but am confused about which processor, chipset and graphics card will best suit my requirements. NOTE: NOT ABLE TO POST THE LINKS. YOU WILL HAVE TO COPY-PASTE THEM. SORRY.
      1) HP Pavilion 15-N004TX - 4th Gen Ci5-4200U / 4 GB RAM / 500 GB HDD / 1 GB Radeon graphics - Rs 39990 www.homeshop18.com/hp-pavilion-15-n004tx-laptop-4th-gen-intel-core-i5-4200u-4gb-500gb-15-6-linux-silver-black/computers-tablets/laptops/product:30989197/cid:16317/
      2) Lenovo Essential G510 (59-398452) - 4th Gen Ci5-4200M / 4 GB / 500 GB / Win8 / 2 GB ATI Sunpro 8570 graphics - Rs 44969 www.flipkart.com/lenovo-essential-g510-59-398452-laptop-4th-gen-ci5-4gb-500gb-win8-2gb-graph/p/itmdp26eprwf5k5v?gclid=CMnh99GA2LoCFaRU4godNiUAGQ&semcmpid=sem_7847244212_laptopsnew_goog&tgi=sem%2C1%2CG%2C7847244212%2Cg%2Csearch%2C%2C24387103114%2C1t1%2Cb%2C%2Blenovo+%2Bg510%2F59+%2B398452%2Cc%2C%2C%2C%2C%2C%2C%2C2
      3) HP Pavilion G6-2303TX (3rd Gen Ci5-3230M / 4 GB / 500 GB / DOS / 1 GB graphics) - Rs 40500 www.flipkart.com/hp-pavilion-g6-2303tx-laptop-3rd-gen-ci5-4gb-500gb-dos-1gb-graph/p/itmdm6yzh4gr4cxd?pid=COMDM6YHWMGDRDEZ&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81
      4) HP Pavilion 15-E039TX (3rd Gen Ci5-3230M / 4 GB / 1 TB / Win8 / 2 GB graphics) - Rs 46690 www.flipkart.com/hp-pavilion-15-e039tx-laptop-3rd-gen-ci5-4gb-1tb-win8-2gb-graph/p/itmdn4d9wykhdcpz?pid=COMDN4CZGFMGJNTN&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81
    Now I am confused about the following: Which processor and chipset is best? How much graphics memory is enough (I am not a gamer)? Are any of these laptops reasonably future-proof, i.e. will they support upcoming programming tools that demand more processor and memory for at least a few years? The laptop will mainly be used for heavy multitasking; it should at least be capable of running Visual Studio 2012 and upcoming versions for at least 4 years, SQL Server 2008 R2 and above, SharePoint, Blend, and Photoshop. Kindly suggest. If anyone knows a good laptop with a good configuration within a 50k budget, please suggest it. Thanks in advance.

    Read the article

  • Remote mouse pointer not visible in VNC

    - by aef
    I have used VNC desktops as a kind of collaboration server, a shared planning and pair-programming environment, for a long time. My latest iteration uses a KVM guest running Fedora 17 "Beefy Miracle", the Cinnamon desktop environment and an x11vnc server. The x11vnc server is started automatically with the desktop environment using the following command: x11vnc -localhost -many -shared -display :0 -bg My problem is that, depending on the VNC client, the mouse pointer of the remote system shown through VNC is not synchronized to my client. I really need this so I can see what my partner is doing on the desktop. With Vinagre 3.2.1 on Ubuntu Oneiric Ocelot (11.10) or Vinagre 2.3.0.3 on Debian Squeeze (6.0), if my local mouse pointer is not inside the VNC view I cannot see the remote system's mouse pointer or its movement at all. With TightVNC on Windows 7, I can make out a faint mouse pointer trail for a very short moment after moving the mouse, but it is not clearly visible. With UltraVNC on Windows 7 the mouse pointer is clearly visible all the time. With Gnome 2 I never had any problems with remote pointer synchronization, using exactly the same clients. I suspect this could have something to do with Cinnamon's dependency on 3D acceleration; on the other hand, starting Cinnamon's fallback environment, Cinnamon 2D, doesn't change anything. Update: same effect when I use Gnome 3.
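
    For reference, the variant of the startup command I would try next; the extra switches are taken from my reading of the x11vnc man page and are an assumption rather than a confirmed fix for Cinnamon:

        # Ask x11vnc to draw the cursor into the framebuffer itself instead of relying
        # on client-side cursor-shape updates (sketch; behaviour on Cinnamon unverified)
        x11vnc -localhost -many -shared -display :0 -bg -noxfixes -nocursorshape -cursor most

    Vinagre should then see the pointer as part of the picture, at the cost of slightly more bandwidth.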

    Read the article

  • Execute remote shell commands on Windows XP Embedded

    - by BartD
    The situation is as follows: we have Windows XP Embedded clients that have all admin shares disabled and only read-only shares (for security reasons). What we want to do is run remote shell (DOS) commands on these machines. At first we looked at the PsExec and BeyondExec applications (and all sorts of variants), but all of them rely on having at least an admin$ share, which is disabled on our systems. Telnet is not secure enough, nor are RSHD servers. So we looked at the next obvious solution: an SSH server. We would also prefer an open-source or freeware solution that is still maintained. I looked at the freeSSH server for Windows, but it didn't run stably; I tried installing copSSH, WinSSH and OpenSSH for Windows, but none of these applications seem to work on Windows XP Embedded: the services either cannot be installed or cannot be started, and I don't know why - presumably some dependency is missing. So are there any other solutions out there? I don't mind having to install an agent of some kind locally on each system, as long as the software is small enough. Can someone suggest some alternatives to what I've already mentioned? Thank you very much.

    Read the article

  • I've just set up FreeBSD 8.0 and can't log in with ssh

    - by Matt
    /etc/hosts.allow is set to allow any protocol from anywhere. I can "ssh localhost" and it works, but I simply get "connection refused" from PuTTY on another machine. Any ideas? I will try to get a copy of the sshd config file as soon as I can find a flash disk to copy it to, but I thought someone might know what needs to be set initially to permit logins. EDIT: I think I can see why it's not working now. If I telnet to the IP address of the server I'm greeted with "MGE UPS SYSTEMS SNMP Web/Agent configuration menu. Enter Password:" Doh. So the IP address is assigned by DHCP, but it seems there is already a device statically assigned to that address. I'll put in a reservation and try again. OK, sorted now - it was an IP address conflict. Windows DHCP isn't smart enough to check whether something is already listening on an address before assigning it.
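
    For anyone landing here before spotting the conflict, a minimal sketch of what a stock FreeBSD 8.0 install needs before remote ssh logins work (assuming no firewall or address clash in the way):

        # Enable and start the bundled OpenSSH daemon
        echo 'sshd_enable="YES"' >> /etc/rc.conf
        /etc/rc.d/sshd start

        # Confirm it is listening on all interfaces, not just localhost
        sockstat -4 -l | grep sshd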

    Read the article

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster sits behind a mountain of firewalls in its own VLAN. Jobs of all sizes and types run on the cluster, and all of the executables are custom-made. The jobs range from 30 seconds to 7 days in duration and may contain one executable or 2000 short-duration sub-jobs. Obviously we are trying to avoid the situation where IT schedules a reboot during a 7-day production job. We have scheduling software which accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown), we could coordinate the two systems. Currently the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster, and we have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule. Is patching monthly on Microsoft's schedule sane for a production Windows cluster? Are there hooks in WSUS where we could say, "please don't reboot just yet"?

    Read the article

  • How can one use online backup with large amounts of static data?

    - by Billy ONeal
    I'd like to set up an offsite backup solution for about 500 GB of data that's currently spread across my various machines. I don't care about data retention rates, as this is only a backup of, not primary storage for, my data; if the backup is stored on crappy non-redundant systems, that does not matter. The data set is almost entirely static, and mostly consists of things like installers for Visual Studio and installer disk images for all of my games. I have found two services which meet most of the requirements: Mozy and Carbonite. However, both impose low bandwidth caps, on the order of 50 kB/s, which prevent me from backing up a data set of this size in any reasonable time (somewhere on the order of 6 weeks), despite the fact that I get multi-MB/s upload speeds to everywhere else from this location. Carbonite has the additional problem that it ignores pretty much every file in my backup set by default, because the files are mostly .iso and .vmdk files, which aren't backed up by default. There are other services such as EC2 which don't have such bandwidth caps, but they typically store data on highly redundant servers and therefore cost on the order of 10 cents/GB/month, which is insanely expensive for this kind of data set (at $50/month I could build my own NAS to hold the data, which would pay for itself after 2-3 months). To be fair, they're offering quite a bit more service than I'm looking for at that price, such as public HTTP access to the data. Does anything exist that meets these requirements, or am I basically hosed?

    Read the article

  • SQL cluster instance names for large project

    - by Sam
    We're setting up two clusters, one dev and one prod. The production cluster will host two SQL instances: an OLTP and a DW. The development cluster will host four OLTP non-production environments and at least one DW non-production environment; we're working on getting more DW non-prods and possibly more OLTP systems. I'm considering a naming scheme like this, where PROJ is a three-letter abbreviation of the project name:
      Dev cluster: MSSQLPROJD1\D1 (DEV), MSSQLPROJD2\D2 (TEST), MSSQLPROJD3\D3 (QA), MSSQLPROJD4\D4 (STAGE), MSSQLPROJD5\D5 (DW)
      Prd cluster: MSSQLPROJP1\P1 (PRD), MSSQLPROJP2\P2 (DW)
    To the left of the slash, each name must be unique network-wide; on each server, the instance name to the right of the slash must be unique. Any thoughts on this? I'm trying to avoid instance names drifting from reality as the project progresses - say we change what we call a certain environment, or want to repurpose one; then we can just update a listing of the purposes of the instances and be done with it. How has a scheme like this worked out for you? Maybe you do things another way in your shop - tell me about it. Thanks.

    Read the article

  • Block SMTP sessions from sender domains that don't themselves accept SMTP connections

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the declared sender's domain has an MX which refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. Note that I'm not, as some commenters have assumed, talking about checking whether the SMTP client will receive messages. The check I want is whether the declared sender's domain (regardless of who the current SMTP client is) will accept SMTP connections from here. In other words: when we get around to sending a message back, we'll need the sender's domain to accept SMTP connections; I want to do that check before accepting the incoming session. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
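
    The closest built-in I know of is Postfix's sender address verification, which probes the declared sender's address (via its domain's MX) before accepting the message. It is not exactly the check described above - it verifies that a probe message to the sender would be accepted, not specifically that the domain accepts SMTP from this host - so treat the following as a sketch of that nearby feature rather than a literal answer; values are illustrative:

        # Reject senders whose address does not verify (Postfix 2.1+)
        postconf -e 'smtpd_sender_restrictions = permit_mynetworks, reject_unverified_sender'
        postconf -e 'unverified_sender_reject_code = 550'
        # Cache probe results so repeat offenders are cheap to reject
        postconf -e 'address_verify_map = btree:$data_directory/verify_cache'
        postfix reload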

    Read the article

  • Downmix surround to Dolby Pro-Logic at the OS/driver level in Windows 7?

    - by davr
    First off, I'm talking about Dolby Pro Logic, a really old technology for encoding four audio channels (L/R/C/SR) into two analog outputs and then extracting them again; it was used in surround-sound systems last century. I have a modern PC that can output 5.1 analog audio (three outputs on the back carry six channels of audio), but I have a really old surround-sound receiver that only has a two-channel L/R input, from which it extracts four channels of audio and outputs them to 5.1 speakers. What I want is some way for the OS, Windows 7, to act as if I really had 5.1 audio channels available, so that applications produce surround audio, but to apply Dolby Pro Logic matrix encoding before the signal leaves the back of my PC, so that it goes out over only two channels. Those two channels would then be sent to my receiver via an RCA cable, which would decode them again and drive the surround speakers. Is anything like this possible? I'm pretty sure I could do it at an application/codec level, but I'm looking for something I only have to set once.

    Read the article

  • Multiple munin-nodes per machine

    - by Alexander T
    I'm collecting statistics remotely through JMX. The munin JMX plugin allows you to select a URL to connect to when gathering statistics, which lets me collect statistics from hosts that do not actually have munin-node installed - a desirable property for some systems where I am prevented from installing munin-node. The way I work today is that if I want to collect JMX stats from machine A, which has no munin-node, I install munin-node on machine B; machine B then collects data from A via JMX and reports it to the munin server, which runs on machine C. This setup requires multiple B-type machines, one per C-type machine. I would like to avoid this and instead use only one B-type machine to collect the data from all A-type machines and report it to the single munin server (the C-type machine). As far as I understand, this requires either running multiple munin-nodes on B or in some other way telling the munin server that the B-type machine is reporting data from multiple sources. Is this possible? Thank you.
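
    A sketch of the "virtual node" approach I believe munin supports, with B collecting and the munin server treating each A-type machine as its own host; hostnames and the plugin's environment variable names are from memory and should be checked against the JMX plugin's documentation:

        # On the munin server (machine C): several host entries all pointing at B
        cat >> /etc/munin/munin.conf <<'EOF'
        [app-a.example]
            address munin-b.example
            use_node_name no
        EOF

        # On B: the JMX plugin instance for app-a declares which host it reports for
        cat >> /etc/munin/plugin-conf.d/jmx-app-a <<'EOF'
        [jmx_app_a_*]
            env.jmxurl service:jmx:rmi:///jndi/rmi://app-a.example:1099/jmxrmi
            host_name app-a.example
        EOF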

    Read the article

  • Make UEFI, GPT, Bootloader, SSD, USB, Linux and Windows work together

    - by user129552
    I like to use the latest hardware and the latest software; thus I have a laptop (Lenovo X220) with UEFI instead of BIOS, an SSD instead of an HDD, a GPT partitioning scheme instead of MBR, and USB to boot from instead of optical disks. I need to use both Windows and Linux. I tried to make them work alongside each other, but I didn't succeed. Most Linux distribution ISOs don't really work on UEFI systems when booted from USB - not even the self-proclaimed cutting-edge Fedora; I also tried Linux Mint Debian Edition and Sabayon Linux (according to this guide), and they did not work. Only Ubuntu worked for me. I first installed Windows 8, which created sda1: Recovery, sda2: EFI system, sda3: msftres, sda4: NTFS Windows. Windows worked without a problem. I then created sda5: linux-swap and installed Ubuntu into sda6: btrfs. After rebooting I was not presented with GRUB2 as expected; instead the system booted straight into Ubuntu and I could no longer access Windows. After fixing dpkg in the btrfs Ubuntu, I followed the Ubuntu documentation on UEFI booting. The result left me with a broken GRUB2, but interestingly, when I now choose the device to boot from I am offered not only the internal SSD, an attached USB device and LAN, but also "Grub2" (broken), "Ubuntu" and "Windows". The result is not very satisfying. What would I have to do to fix everything? Put differently: what operating system should I install at what point, given my possibilities and requirements, so that I end up with a working boot loader in my UEFI/GPT system that presents me with a working Linux and Windows?
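
    What I would try from an Ubuntu live session chrooted into sda6 is reinstalling the EFI flavour of GRUB and letting os-prober pick up Windows; a sketch, assuming the EFI system partition really is sda2 and the Ubuntu root is sda6:

        # Inside a chroot of the installed Ubuntu (sda6), with /dev, /proc, /sys bound
        mount /dev/sda2 /boot/efi
        apt-get install -y grub-efi-amd64
        grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=ubuntu
        update-grub          # os-prober should add a "Windows Boot Manager" entry

        # Afterwards, inspect and reorder the firmware's boot entries if needed
        efibootmgr -v

    This is a sketch of the generic recipe, not something verified on the X220 specifically.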

    Read the article

  • OpenVPN-based VPN server on same system it's "protecting": feasible?

    - by Johnny Utahh
    Scenario: a hosted machine (typically a VPS) serving a wiki, svn, git, forums, email lists (e.g. GNU Mailman), Bugzilla, etc. privately to fewer than 20 people; people not on the team are not allowed access. I'm seeking VPN-restricted access to said server. I have good user experience with OpenVPN-based servers/clients but have yet to administer such systems myself; otherwise I'm an experienced Linux sysadmin. Target system: Ubuntu, probably 12.04. I'm looking to put an OpenVPN process on the above server to "protect" all the above-mentioned services, enabling only OpenVPN-authorized clients/processes to access them. (I can easily acquire additional IP addresses as needed for this setup.) Option: if absolutely needed, I could employ an additional, dedicated "VPN server" VPS simply to act as my VPN front end, but I would prefer to have all server processes (the VPN server plus the other server apps) running on the same machine if possible. I will consider the dedicated-VPN-machine setup further if it 1. makes installation/administration easier, 2. gives a better/easier end-user experience, and/or 3. makes the system significantly more secure. Is any of the above feasible? The main intention: create a VPN from purely hosted resources, and not spend all the effort of building a non-VPN secure site - which typically means SSL-wrapping everything plus continual webserver-application-update management. Let the VPN server deal with access security, and spend less time pushing that security "down" into the other apps/Apache.
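
    Feasible in principle: the usual pattern is to let the private services listen only on the VPN interface, or to firewall everything except OpenVPN (and a rescue ssh) on the public address. A sketch of the firewall half, assuming eth0 is the public interface, tun0 is the VPN, and OpenVPN listens on 1194/udp - interface names and port are assumptions:

        #!/bin/sh
        # Minimal iptables sketch: expose only ssh and OpenVPN publicly,
        # reach everything else (wiki, svn, lists, Bugzilla) via the tunnel only.
        iptables -F INPUT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth0 -p udp --dport 1194 -j ACCEPT   # OpenVPN
        iptables -A INPUT -i eth0 -p tcp --dport 22   -j ACCEPT   # rescue door
        iptables -A INPUT -i tun0 -j ACCEPT                       # VPN clients
        iptables -A INPUT -i eth0 -j DROP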

    Read the article

  • I need a reverse proxy solution for SSH

    - by Bond
    Here is the situation: I have a server in a corporate data center for a project, and I have SSH access to this machine on port 22. There are some virtual machines running on this server, and behind everything many other operating systems are running. Since I am behind the data center's firewall, my supervisor asked whether I can give many people on the Internet direct access to these virtual machines. I know that if I were allowed traffic on a port other than 22 I could set up port forwarding, but that is not allowed, so what could a solution be in this case? The people who would like to connect might be complete novices, who may be happy just opening PuTTY on their machines, or maybe even FileZilla. I have configured an Apache reverse proxy for redirecting Internet traffic to the virtual machines on these hosts, but I am not clear what I can do for SSH. Is there something equivalent to an Apache reverse proxy that can do similar work for SSH in this situation? I do not have the firewall in my hands, nor any port other than 22 open - and even if I requested it, they wouldn't open more. Making users SSH twice (once to the gateway, once to the VM) is also not something my supervisor wants.
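
    The nearest equivalent I know of is an ssh jump through the one machine that is reachable on port 22; with a suitable client config the two hops look like a single connection, which may still satisfy the "no double ssh" constraint. A sketch, with 10.0.0.5 standing in for one of the virtual machines (gateway name and address are hypothetical):

        # OpenSSH 7.3+ clients: one transparent hop through the gateway
        ssh -J user@datacenter-gateway.example user@10.0.0.5

        # Older clients: the same idea via ProxyCommand and ssh -W
        ssh -o ProxyCommand="ssh -W %h:%p user@datacenter-gateway.example" user@10.0.0.5

    Putting the ProxyCommand stanza into the users' ~/.ssh/config (or a PuTTY proxy command) hides the hop from them entirely.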

    Read the article

  • Does my Oracle DBA need root access?

    - by Dr I
    I'm currently in a discussion with my Oracle DBA colleague, who has requested root access on our production servers. I'm not keen to hand out root on production. He argues that he needs it to perform some operations, such as restarting the server, plus some other more obscure reasons. I don't agree, because I've set up an Oracle user and a dba group to which the Oracle user belongs, and everything has been running smoothly without any root permissions so far. I also think that administrative tasks like scheduled server restarts should be performed by the proper administrator (the systems administrator in our case) to avoid issues caused by a misunderstanding of how the infrastructure pieces interact. So I need the help of both sysadmins and Oracle DBAs to point me in the right direction. If my colleague really needs these rights I'll grant them, but I'm basically quite afraid to, because of security and integrity concerns. I know he is a very good Oracle DBA and knows his job well, but I also know of very few cases where a piece of software and its admin really need root access. Once again, I'm not looking for pros/cons so much as advice on how to deal with this situation.
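
    The middle ground I'm leaning towards is a narrow sudo grant rather than full root; a sketch, assuming his account is "oracle" and the only genuine needs are controlled reboots and the Oracle init script (paths are illustrative, edit with visudo):

        # /etc/sudoers.d/oracle-dba  -- create with: visudo -f /etc/sudoers.d/oracle-dba
        # Allow controlled reboots and the Oracle service script, nothing else.
        oracle ALL=(root) NOPASSWD: /sbin/shutdown -r now, /etc/init.d/oracle *

    Every command he runs this way also ends up in the sudo log, which helps with the integrity concern.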

    Read the article

  • Installing Linux on a Windows 8.1 laptop

    - by nicoX
    I would like to do a clean install of a Linux distribution such as Ubuntu on my laptop, which currently runs Windows 8.1. I have two options in mind: a clean install, or dual boot. My technical question: the laptop has an 8 GB SSD, which it uses to boot Windows, and a 500 GB drive for storage. I wonder what that 8 GB SSD actually stores - it can't hold the whole Windows install, as that is much more than 8 GB. Also, if I did a clean install of Ubuntu, could I use the 8 GB SSD to make Ubuntu boot more quickly, and how would I install it that way? Option two: if I dual boot instead, how do I set things up so the SSD helps boot both systems? I also want to ask about the difference between legacy and UEFI boot. Windows runs with UEFI, so when I install Linux, should I use legacy mode, and if I dual boot, which option do I choose?

    Read the article

  • How to run a Fujitsu P27T-7 LED monitor at a non-native resolution and get perfect font rendering

    - by Ilia Rostovtsev
    My problem is the exact opposite of anything I could find: I need to run my monitor at a non-native resolution and still have perfect font rendering. I recently got a 2560x1440 27-inch monitor (Fujitsu P27T-7 LED) and I have an issue with it. I would call it a personal issue, but I'm afraid it's not, as a few people have already agreed with me. I do programming, and the text at 2560x1440 is way too small for comfortable use. I changed the resolution to regular Full HD (1920x1080); the size became just right, but the text now looks slightly blurry compared both to the panel's native 2560x1440 and to my old 23-inch NEC. I am pretty frustrated and not sure how to make the fonts look as sharp as they should. My vision is fine; simple arithmetic shows that at 2560x1440 on 27 inches, everything is around 30% smaller than at 1920x1080 on 23 inches - to get the same font size as a Full HD 23-inch screen, a 2560x1440 monitor would need to be around 32 inches. If I set the new monitor to 1920x1080 the font size is perfect but the quality is not. Could anyone please advise how to solve this? Spec: nVidia 560 Ti with a DVI-D port, on Fedora 20. EDIT 1: Changing fonts doesn't really help, because everything else still doesn't look the way it should. EDIT 2: The monitor buzzes badly at 2560x1440 when there are lots of lines on the screen, like a file listing; if I type ls /usr/bin it makes a nasty, irritating sound. At 1920x1080 it's a bit better. Any idea why?

    Read the article

  • IPv6 seems to be enabled - how do I configure it without interfering with IPv4?

    - by Mister IT Guru
    I noticed that some of my CentOS boxes have IPv6 enabled and seem to have addresses. I have no problem with this, but I would like to get a handle on it, and even connect to them over IPv6 - this would really help if for any reason DHCP has a hiccup. But I'm a bit lost as to where the configuration lives on a CentOS box. (I am also researching this on Google, but I like Server Fault! :) ) I am hoping to be able to log in via the VPN over IPv6, because every now and then that DHCP device has a bad morning and needs to be restarted. (I'm also looking into that issue, but someone else handles it - management separation gone mad!) It's a remote site, so it would be a lot easier for me to connect to these systems, which seem to configure themselves, and use one as a pivot via ssh tunnels to reach and keep managing other remote devices while our main route is out. I guess my questions are: how can I configure IPv6 without interfering with IPv4, and can I influence this auto-configuration I seem to be seeing on CentOS?
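
    On CentOS the per-interface IPv6 settings sit alongside the IPv4 ones, so a static address can be added without touching the v4 side; a sketch for eth0, using a documentation prefix (2001:db8::/64) that you would replace with your real one:

        # /etc/sysconfig/network
        NETWORKING_IPV6=yes

        # /etc/sysconfig/network-scripts/ifcfg-eth0  (appended; existing IPv4 lines stay)
        IPV6INIT=yes
        IPV6ADDR=2001:db8::10/64
        IPV6_DEFAULTGW=2001:db8::1
        IPV6_AUTOCONF=no    # drop this to keep the router-advertised address as well

        # then apply with: service network restart

    The addresses already showing up are most likely SLAAC/link-local auto-configuration, which IPV6_AUTOCONF (and the net.ipv6.conf.*.autoconf sysctls) controls.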

    Read the article

  • How do I replicate Gmail filtering (forwarding mostly)?

    - by projectdp
    I have reached the limits of Gmail forwarding. Previously there was no need to verify forwarding addresses; that is a problem for me now, because the addresses I want to forward to are not natural inboxes but automated systems, with no way to see the contents of the verification email. I want to set up, for example: mobile - email - facebook-email - flickr-email - tumblr-email - posterous-email. How do I do this without Gmail filters? I think I need to use fetchmail to watch my inbox and then auto-forward to the above addresses. Is fetchmail the best solution here, or are there other MRAs? I'd also like to do some more complicated things to the emails in an automated fashion: how would I go about monitoring the inbox, performing some actions on each email before forwarding, and forwarding everywhere? Prerequisites: a server; a fetchmail daemon to poll the account; a local mailbox; a script to clean and forward appropriately (probably Python); sendmail plus a ~/.forward file; a backup email account (probably Gmail). Any help would be greatly appreciated - I'm trying to automate my social content distribution.
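
    Yes - fetchmail plus procmail is the usual pairing: fetchmail polls the Gmail inbox and hands each message to procmail, which applies per-message rules and forwards. A sketch, with every address and the match pattern purely illustrative:

        # ~/.fetchmailrc  (chmod 600) -- poll Gmail over IMAP/SSL, deliver via procmail
        poll imap.gmail.com proto IMAP
            user "me@gmail.com" password "app-password" ssl
            mda "/usr/bin/procmail -d %T"

        # ~/.procmailrc -- keep a local copy, forward photo mail on to the Flickr address
        :0 c
        * ^Subject:.*photo
        ! my-upload-address@flickr.example

    Running fetchmail in daemon mode (fetchmail -d 300) gives the polling loop; the "clean and forward" script can be slotted in as the procmail action instead of the plain forward.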

    Read the article

  • Partitioning & Linux

    - by Zac
    Every tutorial on Linux-based partitioning schemes (or partitioning in general) will tell you that a PC can have either 4 primary partitions, or 3 primaries and 1 extended. They will also all tell you that Linux (in my case, Ubuntu) can be installed on either. It's also come to my attention that it is not atypical for FHS directories such as usr/, tmp/, etc/, home/ or var/ to be mounted on separate partitions. Several questions I am unable to find the answers to, purely for my own edification:
      (1) By "PC", are we really talking about common PC disk types, like IDE or SATA? I guess I'm wondering why PCs are limited to 4 primaries or 3 primaries + 1 extended.
      (2) I'm choking on some basic OS concepts: it is said that a partition can be mounted by a file system or an OS. So I assume this means I can somehow instruct Ubuntu to mount one partition, and then have some part of, say, a ReiserFS file system mounted on another partition? How?
      (3)(a) What about creating swap partitions? Is there too much of a good thing with swap? If I have 4 GB of RAM and a 320 GB disk, what should my swap partition size be, and why? (3)(b) Are swap files the only way to create swap space? Wouldn't a Linux partitioning utility allow me to define a partition as being for virtual memory only?
      (4) Why are partitions only "mounted" by OSes and file systems? Why couldn't I have a program take up its own, say, 512 MB partition, and then have it invoked or used by an OS installed on another partition?
    Thanks for shedding any light here... it's not critical that I know this stuff, but it's got me thinking incessantly. And when I think incessantly, I...can't......sleep....
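
    On (3): swap partitions are made directly on a partition - no swap file involved - and any Linux partitioning or install tool will let you mark a partition as swap. Sizing advice varies; a common rule of thumb is RAM-sized (4 GB here) if you ever want to hibernate, otherwise 2-4 GB is plenty for a desktop. A sketch, assuming the swap partition ended up as /dev/sda5 (device name hypothetical):

        # Turn an existing partition into swap and activate it
        mkswap /dev/sda5
        swapon /dev/sda5

        # Make it permanent and confirm it is in use
        echo '/dev/sda5  none  swap  sw  0  0' >> /etc/fstab
        swapon -s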

    Read the article
