Search Results

Search found 27819 results on 1113 pages for 'linux intel'.

Page 285/1113

  • How to recover a server from a tar file

    - by Mitch
    In Moodle, the LMS, you can export courses as a tar.gz, and someone said they were going to give me such a thing. I was surprised by the 6 GB size. I was even more surprised when I extracted it and found the root directory to be the root of the server. The person giving me the course, instead of exporting it, must have just tarred the entire server! How should I go about recovering this? Is there any way to start this up in a virtual machine? I have a whole Linux server; what to do? I could probably just hand-pick the data files I need, but how do I access a MySQL database without running MySQL? I am so stumped!
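
    One way to read the recovered database without a full server setup is to point a throwaway mysqld at the extracted data directory. A minimal sketch, assuming the tarball was unpacked to /srv/restore and the database is named moodle (both placeholders):

        # Run a temporary mysqld against the recovered datadir; no networking
        # and no grant tables, so it cannot clash with a real server.
        sudo mysqld --no-defaults \
            --datadir=/srv/restore/var/lib/mysql \
            --socket=/tmp/restore.sock \
            --skip-networking --skip-grant-tables &

        # Dump the Moodle database through the local socket:
        mysqldump --socket=/tmp/restore.sock moodle > moodle.sql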

    Read the article

  • Application base [my path to install here] for host [hostnamehere] does not exist or is not a directory

    - by Hyposaurus
    I am trying to start a new installation of Tomcat 7 (on Arch Linux). I have everything configured how I normally would, but I am running into the problem described in the title. This means that Tomcat starts but nothing in that host gets deployed. My server and host file:

        <Host name="localcity" appBase="/home/gary/Sites/localcity/" autoDeploy="true" unpackWARs="false">
        </Host>

    And the directory it is in:

        drwxrwxr-x  4 doug   tomcat 4096 Apr 15 11:52 .
        drwx------ 33 gary   users  4096 Apr 15 20:40 ..
        drwxrwxr-x  2 tomcat tomcat 4096 Apr 15 20:40 localcity
        drwx------  2 gary   users  4096 Mar 31 10:10 lod

    It looks like other installations I have, but I am not sure what the problem is.
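
    A quick hedged check is whether the user Tomcat runs as (assumed here to be tomcat) can traverse to the appBase at all; every parent directory needs the execute bit for that user, and the drwx------ shown on the directory above would block traversal:

        # Try the path as the tomcat user; a failure here points at permissions:
        sudo -u tomcat ls /home/gary/Sites/localcity/
        # Show the mode of every path component at once:
        namei -l /home/gary/Sites/localcity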

    Read the article

  • How can I create an external SSL wrapper/tunnel page for an insecure webpage behind a firewall?

    - by Ross Rogers
    I have a security cam with a built-in web page inside my home network. That camera is using basic HTTP authentication instead of SSL. I want to be able to access the camera's web page from outside my network, but I don't want to open an unencrypted video stream to the outside world. Right now I'm doing some cumbersome SSH tunneling where I bounce off an SSH server like:

        ssh -N -L 9090:CAMERA_IP:80 [email protected]

    and then I connect to my web page like:

        http://localhost:9090

    But this is a pain. Now, gentle reader, I beseech you to tell me how I can use Linux (Ubuntu) to get a fully encrypted SSL connection to my internal web page without the hassle of creating an SSH tunnel each time. I believe I can use stunnel, but I'm not sure of the command.
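
    A minimal stunnel server configuration along these lines (a sketch; the certificate path, the listening port, and CAMERA_IP are placeholders):

        ; /etc/stunnel/camera.conf
        cert = /etc/stunnel/stunnel.pem
        [camera]
        accept  = 0.0.0.0:8443
        connect = CAMERA_IP:80

        # Start it, then browse to https://your-public-ip:8443
        stunnel /etc/stunnel/camera.conf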

    Read the article

  • Open source CMS for a university department

    - by Greg Kuperberg
    I realize that this type of question gets asked over and over again. Nonetheless, I want to ask a more specific version. I'm in a university math department. Long ago our sysadmins (or just the one, at the time) switched to a web content management system. At the time, Zope looked like an informed choice. We have used Zope for years, but at least in my opinion it has always been a controversial decision. At the time I didn't understand why it was so important to have a web CMS. Now I see that it certainly is important, but I don't know that it should be Zope.

    The good (even necessary) features of Zope for us are:

    - It's free and Linux-based.
    - It is a true CMS and not something else (e.g. a wiki or blog).
    - It lets you write HTML and scripts.

    What I really don't like about Zope is that the outcome of using it is all-or-nothing in a lot of ways. At least in convenient use, it ends up dividing the enterprise into superusers who can do everything and lusers who can't do anything (except write their own home pages in plain HTML). It has a huge user manual, which end users won't have time to read. Somehow, with the access permissions, the simple thing to do is to let a few admins access all of the source and data, and that's it. Since this is a math department, the user base varies from real novices to people who understand computers reasonably well. But as it stands, any change that involves Zope has to go through the sysadmins. When the sysadmins are in a hurry, sometimes they will also just add plain HTML pages to the web site instead of using the Zope framework. It doesn't help matters that Zope is fairly disk-intensive and fairly hype-intensive.

    Not to dwell on Zope too much, but I am wondering what the right web CMS is for a mixed user base of terminal novices, quick studies, and experienced users. Some users might want intermediate permissions, e.g. read permission but not write permission, or permission to change some subset of the pages or see some subset of the database tables. It should also be Linux-based, open source, and a little bit scalable; of course, widely used and well-supported is a good idea. I might guess that the answer is Drupal, just because that was the general answer before, but I don't know if it is the right type of CMS for this purpose. (Note that Python is a relatively popular language in a math department, among other reasons because Sage is based on Python.)

    I can see that I didn't completely define the question and that people are guessing what type of site it is. It is the UC Davis Math Department. The main structure of the site is not suitable for a wiki, and it is also not the same thing as a course environment like Moodle. Rather, the site is mostly structured as a generic medium-small enterprise. Some components of the site could be a wiki, Moodle, a LaTeX plugin, Request Tracker, etc. However, the main issue is not these components. The main issue is that it would be better to decentralize management of the site. Right now, everything that is in the Zope CMS has to go through the sysadmins. Every other user in the department either has to put in a request to them or write their own web pages with no help from Zope. There are two main reasons for this: (1) other people in the department don't have time to read the Zope manual, and (2) it's a hassle to set up intermediate permissions in Zope. However, there are other people in the department who know how to write computer programs and use markup languages.

    I wouldn't want a solution that assumes that users either can't be trusted with much more than drag-and-drop, or that they are IT professionals who sleep with documentation manuals. I'm wondering if Plone/Zope still has this quality, since certainly Zope by itself does. But I also wonder sometimes if common-sense flexibility is unfashionable these days, and whether things in general have to be either mindlessly easy or incredibly powerful.

    Read the article

  • Logrotate Successful, original file goes back to original size

    - by drewrockshard
    Has anyone had issues with logrotate before that cause a log file to get rotated and then go back to the same size it originally was? Here are my findings:

    Logrotate script:

        /var/log/mylogfile.log {
            rotate 7
            daily
            compress
            olddir /log_archives
            missingok
            notifempty
            copytruncate
        }

    Verbose output of logrotate:

        copying /var/log/mylogfile.log to /log_archives/mylogfile.log.1
        truncating /var/log/mylogfile.log
        compressing log with: /bin/gzip
        removing old log /log_archives/mylogfile.log.8.gz

    Log file after the truncate happens:

        [root@server ~]# ls -lh /var/log/mylogfile.log
        -rw-rw-r-- 1 part1 part1 0 Jan 11 17:32 /var/log/mylogfile.log

    Literally seconds later:

        [root@server ~]# ls -lh /var/log/mylogfile.log
        -rw-rw-r-- 1 part1 part1 3.5G Jan 11 17:32 /var/log/mylogfile.log

    RHEL version:

        [root@server ~]# cat /etc/redhat-release
        Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

    Logrotate version:

        [root@DAA21529WWW370 ~]# rpm -qa | grep logrotate
        logrotate-3.7.1-10.RHEL4

    A few notes: the service can't be restarted on the fly, so that's why I'm using copytruncate. Logs are rotating every night, according to the olddir directory having log files in it from each night.
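
    One hedged check worth adding: if the process keeps writing at its old file offset after copytruncate, the file reappears as a sparse file with the old apparent size, which comparing apparent size against actually allocated blocks will reveal:

        ls -lh /var/log/mylogfile.log   # apparent size (e.g. 3.5G)
        du -h  /var/log/mylogfile.log   # actual disk usage; tiny if sparse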

    Read the article

  • Apache Error Log - "Web Path" instead of Filesystem Path

    - by Craconia
    Hello everyone, I'm running Apache on Linux and I'm using OpenSSH to provide SFTP access to some customers so they can upload their pages and also look at their respective site logs (access & error). I'm using the new feature in OpenSSH to chroot their SFTP access, and so far so good. My problem is that in the error_log, every "File not found..." entry gives the OS filesystem path as opposed to the "web" path. I'd rather have the web path in the error log in order not to reveal the OS path. Since I'm already chrooting the users, I don't want to reveal WHERE on the OS their files are actually located. Is it possible to change this behaviour via any directive? I tried looking for it but couldn't find anything :( Thanks, Craconia

    Read the article

  • CentOS: how to install SystemTap

    - by Mingfei.hua
    I'm really new to SystemTap; I just want to install and try it on my lab server. My Linux release version is CentOS 6.3 and the kernel version is 2.6.32-279.5.2.el6.i686. Following some documents, I ran:

        yum install kernel-devel
        yum install kernel-debuginfo
        yum install systemtap

    All completed without error or warning, but when I try to test SystemTap with:

        stap -v -e 'probe vfs.read {printf("read performed\n"); exit()}'

    I get this error:

        Pass 1: parsed user script and 83 library script(s) using 25180virt/14088res/2684shr kb, in 120usr/10sys/161real ms.
        semantic error: missing i386 kernel/module debuginfo under '/lib/modules/2.6.32-279.5.2.el6.i686/build' while resolving probe point kernel.function("vfs_read")
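
    A hedged sketch of the usual fix: the debuginfo package must match the running kernel exactly, which a plain yum install kernel-debuginfo does not guarantee (the repo id debug comes from CentOS-Debuginfo.repo and may differ on a given system):

        yum --enablerepo=debug install \
            kernel-debuginfo-$(uname -r) \
            kernel-debuginfo-common-$(uname -m)-$(uname -r)
        stap-prep   # ships with systemtap and reports anything still missing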

    Read the article

  • 403 Forbidden when trying to download file that was uploaded using SSH

    - by Simon Hartcher
    I have FTP access to an Apache server on Linux to upload files so that they can be downloadable from the web. I was recently granted SSH access for extra permissions and figured that it would be quicker to download files directly to the server, instead of downloading them to my machine and then FTPing them to the server. When I downloaded a file to the server over SSH and then placed it in the public_html directory, it was not visible from the web. The permissions (from SSH and the FTP client) were the same as all the other files that are visible, but it was not visible in the directory listing, and if I typed the filename into my browser I would get a 403 error. Obviously, when I FTP a file to the server, something else happens that makes it web-visible that I am not currently privy to. What am I missing that is causing the file to be invisible from the web?
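
    A hedged diagnostic: comparing a web-visible file with the invisible one often shows the difference, and on SELinux-enforcing systems a file created over SSH can carry the wrong security context even when the Unix permissions match (file names below are placeholders):

        ls -lZ public_html/visible.file public_html/invisible.file
        # If the SELinux context differs, restore the default:
        restorecon -v public_html/invisible.file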

    Read the article

  • Force local IP traffic to an external interface

    - by calandoa
    I have a machine with several interfaces that I can configure as I want, for instance:

        eth1: 192.168.1.1
        eth2: 192.168.2.2

    I would like to be able to forward all the traffic to one of these local addresses through the other interface. For instance, all requests to an iperf, FTP, or HTTP server at 192.168.1.1 are not just routed internally, but forwarded through eth2 (and the external network will take care of re-routing the packet to eth1). I tried and looked at several commands, like iptables, ip route, etc., but nothing worked. The closest behavior I could get was done with:

        ip route change to 192.168.1.1/24 dev eth2

    which sends all 192.168.1.x on eth2, except for 192.168.1.1, which is still routed internally. The goal of this setup is to do interface driver testing without using two PCs. I am using Linux, but if you know how to do that with Windows, I'll buy it!
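
    One technique worth noting, as a sketch (a different approach than routing rules): move one interface into its own network namespace, so the kernel treats the two ends as separate hosts and traffic genuinely crosses the wire. This assumes a kernel and iproute2 with netns support:

        ip netns add test
        ip link set eth1 netns test
        ip netns exec test ip addr add 192.168.1.1/24 dev eth1
        ip netns exec test ip link set eth1 up
        # Now traffic between 192.168.2.2 and 192.168.1.1 leaves via eth2:
        ip netns exec test iperf -s &
        iperf -c 192.168.1.1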

    Read the article

  • How to make PuTTY X11 forwarding work in a screen session?

    - by Alex Howell
    I'm using PuTTY with X11 forwarding enabled, using Xming as my X server on Windows 7. When I SSH to a Linux host, X11 forwarding works fine. If I start a "screen" screen-manager session, it still works fine. If I disconnect from the screen session, then later resume it in a different PuTTY window using "screen -rd", X11 forwarding doesn't work any more; I get an error:

        xterm X connection to localhost:11.0 broken (explicit kill or server shutdown).

    This seems to be because $DISPLAY is different in each PuTTY SSH session (localhost:11.0 in the first session, then localhost:12.0 in the next, and so on). If I manually set $DISPLAY to localhost:12.0 in the screen session, X11 forwarding works again. Is there a way to automatically set $DISPLAY in the screen session, each time it's resumed, so that it always matches the parent PuTTY session's?
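
    A common workaround, sketched under the assumption of a bash login shell: save $DISPLAY outside screen on every login and re-read it inside screen before launching an X client ($STY is only set inside a screen session):

        # In ~/.bash_profile (runs in the fresh PuTTY session, outside screen):
        if [ -n "$DISPLAY" ] && [ -z "$STY" ]; then
            echo "$DISPLAY" > ~/.last_display
        fi
        # Inside screen, refresh before starting an X program:
        alias fixdisplay='export DISPLAY=$(cat ~/.last_display)'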

    Read the article

  • Creating hard drive backup images efficiently

    - by Arrieta
    We are in the process of pruning our directories to recover some disk space. The 'algorithm' for the pruning/backup process consists of a list of directories and, for each one of them, a set of rules, e.g. 'compress *.bin', 'move *.blah', 'delete *.crap', 'leave *.important'; these rules change from directory to directory but are well known. The compressed and moved files are stored in a temporary file system, burned onto a Blu-ray, tested within the Blu-ray, and finally deleted from their original locations. I am doing this in Python (basically a walk statement with a dictionary holding the rules for each extension in each folder). Do you recommend a better methodology for pruning file systems? How do you do it? We run on Linux.
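
    For comparison, the same per-directory rule set can be sketched with find; the directory, the staging path, and the extensions below are placeholders:

        cd /data/somedir
        find . -maxdepth 1 -name '*.bin'  -exec gzip {} +
        find . -maxdepth 1 -name '*.blah' -exec mv -t /staging {} +
        find . -maxdepth 1 -name '*.crap' -delete
        # '*.important' files are simply left untouched.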

    Read the article

  • Can't rename/move files from OSX that were copied from NTFS

    - by 99miles
    Hello. I recently had data recovered, and it was sent back to me on what I think is an NTFS drive. I copied all the files over to a file share I have on a Linux box, which is ext4. Now I have that share mounted on my OS X machine, and I can't move or rename most of the files. However, in a couple of cases I was able to rename a folder after the third try. Another time I was able to rename a folder once, but not again. All the permissions are showing up the same on the command line; I can't see any differences between the permissions on any of the files/folders. Any clues? Thanks.
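
    A hedged check to run on the Linux box: names recovered from NTFS sometimes carry trailing spaces or other characters that ext4 accepts but the protocol OS X mounts the share with rejects; listing with escapes makes them visible:

        ls -b           # shows non-printing characters as C-style escapes
        ls | cat -A     # alternative: marks line ends, tabs, etc.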

    Read the article

  • Plesk directory structure problems

    - by johnnietheblack
    I have an entire website with the following directory structure:

        /example.com
            /html (public)
                /css
                /js
                index.php
            /lib
                session.php
                other_lib_files.php
            /views
                index.php
            /models
            /controllers

    As illustrated, the html is public, and anything above it is private. My site now needs to upgrade servers, and the new server (Linux w/ Plesk) has the following structure (reduced to the problematic parts below):

        /myplesksite.com
            /httpdocs
                /css
                /js
                index.php
            /private
                /lib
                /models
                /views

    What I would THINK is that I should be able to put my /lib, /views, /models, etc. in the directory directly above /httpdocs, the same way I had it on my previous server. Is that possible? Or do I have to put it in private? I would really love not to have to adjust my internal paths throughout the site if not necessary...

    Read the article

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support, a common question has come up: 'How do I test without the hardware crypto acceleration?'. Initially this came up just for development use, so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing would run on all supported types of hardware anyway). I've also seen it asked in a customer context, so that we can show that there is a performance gain from the hardware crypto acceleration (not just the fact that SPARC T4 is a much faster processor than T3) and measure what it is for their application.

    With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI, because in both of those classes of processor the encryption doesn't require a device driver; instead it is unprivileged, userland-callable instructions. It turns out there is a way to do this using features of the Solaris runtime loader (ld.so.1).

    First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions, selected at runtime based on the capabilities of the machine. The alternative to this is having the application coded to call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so, and the unfortunately misnamed, for historical reasons, libsoftcrypto.so).

    The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 not to select the HWCAP section matching certain features, even if isainfo says they are present. For SPARC T4 that would be:

        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"

    and for Intel systems with AES-NI support:

        export LD_HWCAP="-aes"

    This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. It also works for the Oracle DB and Java JCE. However, it does not work for the default-enabled OpenSSL "t4" or "aes-ni" engines (unfortunately), because they make explicit calls to getisax() themselves rather than using multiple ELF cap sections. We can still use OpenSSL to demonstrate this by explicitly selecting the "pkcs11" engine, using only a single process and thread:

        $ openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     54170.81k   187416.00k   489725.70k   805445.63k  1018880.00k

        $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     29376.37k    58328.13k    79031.55k    86738.26k    89191.77k

    We can clearly see the difference this makes when AES offload to the SPARC T4 is disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again on a SPARC T4-1 using only a single process/thread; using -multi you will get even bigger numbers):
        $ openssl speed -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     85526.61k    89298.84k    91970.30k    92662.78k    92842.67k

    Yet another cool feature of the Solaris linker/loader; thanks Rod and Ali. Note that the openssl speed output above is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.
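
    As a hedged verification sketch: isainfo queries the kernel rather than the runtime loader, so it shows that LD_HWCAP changes only which library code is selected, not what the hardware reports:

        isainfo -v                     # still lists aes, sha256, etc. on a T4
        LD_HWCAP="-aes" isainfo -v     # unchanged; only library selection differs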

    Read the article

  • How do I setup a systemd service to be started by a non root user as a user daemon?

    - by Hans
    I just finished the install and setup process of systemd on my Arch Linux system (2012.09.07). I uninstalled initscripts (and removed the configuration files). What I want to do is create a service that can be started and stopped by a non-root user. The service is to start a detached screen session running rtorrent. However, I want every user on the system who has set this service to start (enabled) to have a particular instance started for them specifically. How would one go about doing this? I remember reading that systemd supports user instances of services, but I have been unable to find any information on how to set this up, or whether it relates to what I am looking for. The service file that I have used for the system:

        [Unit]
        Description=rTorrent

        [Service]
        Type=forking
        ExecStart=/usr/bin/screen -d -m -S rtorrent /usr/bin/rtorrent
        ExecStop=/usr/bin/killall -w -s 2 /usr/bin/rtorrent
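
    A minimal sketch of the per-user route, assuming systemd's user-session support is available on the installation: the unit goes in the user's own unit directory and is managed with systemctl --user, so no root access is needed and each user gets their own instance:

        mkdir -p ~/.config/systemd/user
        cp rtorrent.service ~/.config/systemd/user/
        systemctl --user enable rtorrent.service
        systemctl --user start rtorrent.service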

    Read the article

  • USB to USB CD ROM emulator

    - by JohnnyLambada
    I'm wondering if anyone knows of a CD-ROM emulator that runs on Linux. I want to emulate this configuration:

        [CDROM DRIVE]----USB CABLE----[COMPUTER UNDER TEST]

    where [COMPUTER UNDER TEST] is a computer that boots from a physical CD inserted into the [CDROM DRIVE]. Only instead of the [CDROM DRIVE] I want the following configuration:

        [CD IMAGE BUILD MACHINE]----USB CABLE----[COMPUTER UNDER TEST]

    I want to build an ISO image on the [CD IMAGE BUILD MACHINE] and have some sort of USB CD-ROM emulator running on it to serve up the ISO image to the [COMPUTER UNDER TEST] as though it were talking to the [CDROM DRIVE]. Does this exist? If it does, I can't find it. I want to do this so I can test out bootable CDs without burning a lot of coasters.
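
    One heavily hedged possibility: if the build machine has a USB device/OTG controller (common on ARM boards, not on a normal PC's host-only ports), Linux's mass storage gadget can present an ISO as a read-only CD-ROM:

        # image path is a placeholder
        modprobe g_mass_storage file=/path/to/image.iso cdrom=1 ro=1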

    Read the article

  • How to install mpgtx from source code

    - by Ahmet vardar
    I am new to Linux servers. I have an mpgtx folder in my root; how can I install it? In the README file it is written:

        ./configure && make

    When I type this I get a permission denied error. Thanks

    EDIT: Here are the steps I did:

        root@server [/]# cd /mpgtx
        root@server [/mpgtx]# ./configure
        -bash: ./configure: Permission denied
        root@server [/mpgtx]# make
        -----------------------------------------------------------------------------
        Hello ! I'm afraid I'm a dummy Makefile.
        My goal in life is to politely ask you to run the configure script to
        actually generate a real Makefile.
        Would you be kind enough to type "./configure --help" to see the options
        that will suit your needs ?
        Please note that typing "./configure" without option will generate a
        Makefile that will suit most people needs.
        I wish you a good day. Please don't drive too fast.
        -----------------------------------------------------------------------------
        root@server [/mpgtx]# ./configure
        -bash: ./configure: Permission denied
        root@server [/mpgtx]#
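
    A hedged note on the likely fix: "Permission denied" on ./configure usually means the script lost its execute bit (for example, when the archive was extracted without permissions). Either restore the bit or run the script through the shell directly:

        chmod +x configure && ./configure && make
        # or, without touching the bit:
        sh ./configure && make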

    Read the article

  • Remote desktop to Fedora 20 with xrdp

    - by 5YrsLaterDBA
    I was able to set up xrdp on my Fedora 13 machine and access it from my Windows 7 machine by following the steps in the first post on this thread. It was simple and easy. But when I try the same on my Fedora 20 machine, things are quite different. There is no error message, but there is some new info like this:

        # chkconfig --levels 35 xrdp on
        Note: Forwarding request to 'systemctl enable xrdp.service'.
        # service xrdp start
        Redirecting to /bin/systemctl start xrdp.service

    and then I cannot reach it remotely from my Windows machine. I also did the following, based on the last post of the above thread:

        # yum -y install tigervnc-server

    Is there any configuration I should do to make xrdp work for me? The machines can ping each other.

    EDIT: I can access the shared folder on my Windows machine from my Fedora 20 machine, so it seems the problem is on the Fedora side. How do I check whether a service on Linux is running? "service --status-all" doesn't give me useful information.
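
    Since Fedora 20 is systemd-based, service state is queried with systemctl rather than the old service script; a quick sketch (3389 is xrdp's default port):

        systemctl status xrdp.service      # running state plus recent log lines
        systemctl is-active xrdp.service
        ss -tlnp | grep 3389               # confirm xrdp is actually listening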

    Read the article

  • nc or socat: How to read data from remote:/dev/ttyACM0 ?

    - by AndreasT
    I have a device running on a remote computer at /dev/ttyACM0. Now I want to read that data on my computer. I can connect to the remote machine over SSH. Unfortunately I am an nc/socat rookie, and no howto has covered this. Semantically I want something like this:

        cat remote:/dev/ttyACM0

    The remote system has a limited Linux on it, and I can't install packages (socat is not available there; nc is). Super cool would be to have some forwarded device, local:/dev/ttySOCK0, pointing to remote:/dev/ttyACM0. Thanks for any help.
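
    A hedged sketch: since SSH works and only the local side needs socat, the device can be read through the SSH channel and even exposed as a local pty (user and host are placeholders, and the remote port may need an stty call first):

        # Plain stream:
        ssh user@remote 'cat /dev/ttyACM0'
        # Forwarded device, roughly local:/tmp/ttySOCK0 -> remote:/dev/ttyACM0:
        socat PTY,link=/tmp/ttySOCK0,raw 'EXEC:ssh user@remote cat /dev/ttyACM0'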

    Read the article

  • Erase all traces of Windows 8

    - by user1032531
    I just bought a new HP Pavilion desktop with Windows 8. I wish to totally remove Windows 8 and all data on the hard drive, remove any Windows partitions, delete all data, and then install a fresh Linux. The problem is I can't seem to boot from USB or boot from CD. It appears that Windows 8 added the following two "features": UEFI, which substitutes for what we have known as the BIOS, and Secure Boot, which prevents anything but the installed operating system from booting. How do I completely and totally erase all traces of Windows 8? Is it still possible to reformat the hard drive? I don't want a dual boot, I don't want to go back to Windows 7, I just want anything Windows gone.
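
    A hedged sketch of the wipe itself, once Secure Boot is disabled (or legacy boot enabled) in the firmware setup and a Linux live USB boots. /dev/sda is an assumption, and this destroys everything on the disk:

        wipefs --all /dev/sda        # remove all filesystem/partition signatures
        # or zap the GPT and protective MBR explicitly:
        sgdisk --zap-all /dev/sda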

    Read the article

  • How to setup a user account for a web application

    - by ximus
    Hi, what are the main guidelines for setting up a user account on a Linux machine for a web app? In my case it is a Rails application that does file management. The first thing I can think of is to limit access rights to only the directories it needs. But how exactly should I go about this? Set up rights through a user group, or through the user's ownership of those directories? I have very little experience in user rights management. What else do I need to consider? I've heard of ACLs and SELinux; do I need to look into any of these to guarantee decent security for my simple web app? Any advice about this and anything not mentioned is welcome. Thanks, Max. I will be using Ubuntu.
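
    A minimal sketch on Ubuntu (the user name and paths are placeholders): a dedicated system user with no login shell that owns only what the app must write:

        sudo adduser --system --group --no-create-home --shell /usr/sbin/nologin railsapp
        sudo chown -R railsapp:railsapp /srv/myapp/uploads
        # rwX: execute only on directories; group may read, others get nothing
        sudo chmod -R u=rwX,g=rX,o= /srv/myapp/uploads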

    Read the article

  • Samba4 advice for production use

    - by pgb
    I have an old Samba 3 + LDAP server installed that needs to be rebuilt. I'm weighing my options; Windows Server seems too expensive at the moment, and Samba 4 appears to be a nice option, coupled with the latest BIND 9, which can dynamically add the computers to the DNS. I have about 30 workstations, so I still consider it a small network. My questions are: Is Samba 4 stable enough for production? It seems as if the Samba team is too cautious about when to call their version final, or even beta, compared with other open source projects. What Linux distribution would you recommend to set it up? I usually use Ubuntu Server, but may use another one if installing/maintaining Samba 4 is better on it.

    Read the article

  • Tor Browser: how do I restart just the browser?

    - by GDR
    I'm using the Tor Browser on Linux from time to time, but I close the browser because it has high memory usage, and I leave Vidalia running in the background to help the network and relay traffic. The problem is, when I want to use the Tor Browser again, I have to shut down Vidalia and start it again. This takes time and has a negative effect on the network. When I execute ./App/Firefox/firefox-bin, the browser starts but says it's not connected via the Tor network. Any ideas how to start the Tor browser and make it connect to the existing Vidalia instance?
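
    A heavily hedged sketch: newer bundles (those using Tor Launcher rather than Vidalia) support environment variables telling the browser to reuse an already-running tor; whether this works with a Vidalia-era bundle is an assumption to verify:

        # 9050 is tor's usual SOCKS port; adjust to the running instance
        TOR_SKIP_LAUNCH=1 TOR_SOCKS_PORT=9050 ./App/Firefox/firefox-bin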

    Read the article

  • Ubuntu 10.04 Lucid Server Minimal Install: Slow terminal scrolling

    - by noname
    I have a minimal install of Ubuntu 10.04 Server for testing and learning purposes. There is a very annoying occurrence: whenever I try "man dpkg" or any command that loads a few screen lengths of text (e.g. "ls -al"), the redraw speed of the console is just way too slow. I can see how each new line causes the whole screen to redraw. Note that this doesn't happen inside X; no GUI is installed. I have been experimenting with adding vesafb to the GRUB line as some guides suggested, but no speedup happened. You might be able to reproduce this behaviour on your Linux system by switching to the terminal using CTRL+ALT+F1. Is there any way to speed scrolling up?
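
    A hedged example of forcing a VESA framebuffer console via GRUB 2 on Ubuntu 10.04, which often speeds console scrolling considerably (791, i.e. 1024x768 at 16 bpp, is an illustrative mode number):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="quiet vga=791"
        # then apply and reboot:
        sudo update-grub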

    Read the article

  • Any way to know if two ip address points to the same machine?

    - by Vivek V K
    Is there any way to find out whether two different IP addresses on two different networks actually point to the same physical device? I need it on Linux.

    Edit: I have the same server (a Raspberry Pi) connected via two intranets to my client. I don't know the IP address of the server, as it is assigned by DHCP. The crude way is to reach the Raspberry Pi from one intranet and check ifconfig to find the IP address of the machine on the other intranet. I want to know if there is any other way to do it. I know the MAC address of the machine, but I don't know how to find the IP address based on the MAC address.
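
    A hedged sketch: given the MAC address, the IP on each network can usually be recovered from the ARP table after sweeping the subnet (the subnet below is a placeholder; b8:27:eb is the Raspberry Pi Foundation's OUI prefix):

        # populate the ARP cache, then look the MAC up:
        nmap -sn 192.168.1.0/24 >/dev/null   # or: ping -b 192.168.1.255
        ip neigh | grep -i b8:27:eb
        # or with arp-scan, if installed:
        sudo arp-scan --localnet | grep -i b8:27:eb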

    Read the article
