Search Results

Search found 38288 results on 1532 pages for 'oracle linux partners'.


  • How to mount remote samba share from local host with multiple groups?

    - by Dragos
    I am using mount.cifs to mount a remote samba share (both client and server are Ubuntu server 8.04) like this:

        mount.cifs //sambaserver/samba /mountpath -o credentials=/path/.credentials,uid=someuser,gid=1000

        $ cat .credentials
        username=user
        password=password

    I mounted the share as a local user with a username and password via mount.cifs, but the problem is that the user is part of multiple groups on the remote system, and with mount.cifs I can only specify one gid. Is there a way to specify all the gids that the remote user has? Is there a way to:
    1. Mount the remote samba share with multiple groups on the local system?
    2. Browse the mount from 1) in a terminal, since I want to pass some files from the share as arguments to local programs?
    Other solutions would be nautilus sftp://, which runs through gvfs; but newer GNOME no longer writes ~/.gvfs to disk, so I can't browse it in a terminal. The last option would be NFS, but that means I have to synchronize the uids and gids on the local system with the ones on the server.

    Read the article

  • RHEL5 php5-curl install fails

    - by The Rook
    PHP's curl bindings are nowhere to be found in yum. Looking in yum.repos.d I can see that rpmforge is being used. Build from source? phpize isn't installed and it isn't in yum either. What do I do? How do I repair the repo? This is an RHEL5 machine, i686.
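    A quick, hedged way to narrow this down (package names below are the stock RHEL/EPEL ones and may differ with rpmforge): curl support is often compiled into the base php package, and phpize ships in php-devel, so checking for the extension and installing the devel packages is usually the first step.

        # Is the curl extension already compiled in?
        php -m | grep -i curl

        # phpize comes from php-devel; curl-devel is needed to build curl bindings
        yum install php-devel curl-devel

        # Sanity-check the configured repositories
        yum clean all && yum repolist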

    Read the article

  • sSMTP Configuration Question

    - by SevenCentral
    I've installed sSMTP on Ubuntu 10.04 via:

        sudo apt-get install ssmtp

    My configuration file is:

        #
        # Config file for sSMTP sendmail
        #
        # The person who gets all mail for userids < 1000
        # Make this empty to disable rewriting.
        [email protected]
        # The place where the mail goes. The actual machine name is required no
        # MX records are consulted. Commonly mailhosts are named mail.domain.com
        mailhub=smtp.gmail.com:587
        # Where will the mail seem to come from?
        #rewriteDomain=
        # The full hostname
        hostname=somedomain.com
        # Are users allowed to set their own From: address?
        # YES - Allow the user to specify their own From: address
        # NO - Use the system generated From: address
        #FromLineOverride=YES
        [email protected]
        authpass=****
        usestarttls=yes

    Am I transmitting my credentials in clear text? Is calling ssmtp a secure operation? Thanks.
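    With mailhub on port 587 and usestarttls=yes, ssmtp should upgrade the connection with STARTTLS before authenticating, so the credentials should not go over the wire in clear text. A generic way to confirm the server really offers STARTTLS (not specific to ssmtp):

        # Check that Gmail's submission port advertises and accepts STARTTLS
        openssl s_client -connect smtp.gmail.com:587 -starttls smtp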

    Read the article

  • Why is scp not overwriting my destination file?

    - by Noli
    I'm trying to back up a file via the command

        scp /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz

    When I run it, the scp progress bar shows up and it looks like it's transferring the file. However, when I log into the destination server to check the file, the timestamp and filesize haven't changed from the older version, so it looks like scp didn't overwrite the old file at all. It only seems to work when I manually delete the file from the destination server. I'm running Ubuntu, and this is happening on two servers: one Cygwin ssh, and one Fedora Core 3. Anyone have any idea why this is happening? I thought scp would ONLY overwrite existing files. Thanks
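    One way to see what is actually happening (paths and hostname as in the command above): compare checksums on both ends after the copy, or use rsync, which prints what it transfers and updates files in place.

        # Compare the source and destination checksums after the scp run
        md5sum /tmp/backup.tar.gz
        ssh hostname md5sum /home/user/backup.tar.gz

        # Alternative that reports what it changed
        rsync -av --progress /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz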

    Read the article

  • Enable gzip on Nginx

    - by Rob Wilkerson
    Yes, I know that there are a lot of other questions out there that seem exactly like this. I think I must've looked at all of them. Twice. In desperation, I'm adding another in case my specific configuration is the issue. Bear with me. First, the question: What do I need to do to get gzip compression to work? I have an Ubuntu 12.04 server running nginx 1.1.19. Nginx was installed with the following packages: nginx, nginx-common, nginx-full. The http block of my nginx.conf looks like this:

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            keepalive_timeout 65;
            tcp_nodelay on;
            gzip on;
            gzip_disable "msie6";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Both PageSpeed and YSlow are reporting that I need to enable compression. I can see that the request headers indicate Accept-Encoding: gzip,deflate,sdch, but the response headers do not have the corresponding Content-Encoding header. I've tried various other config values (gzip_vary on, gzip_http_version 1.0, etc.), but no joy. As far as I know, I can only assume that nginx was compiled with compression support, but if there's any way to verify that, I'd love to know. If anyone sees anything I'm missing or can suggest further debugging, please let me know. I'm no sysadmin and I'm new to Nginx, so I've exhausted everything I can think of or have read. Thanks.
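    Two generic checks that may help (not specific to this install): nginx -V prints the compile-time configuration, and curl shows whether a gzip-encoded response comes back when the client advertises support. Also note that by default nginx only compresses text/html; CSS and JavaScript need gzip_types to be extended, which is a common reason PageSpeed/YSlow keep complaining.

        # The gzip module is built in unless --without-http_gzip_module appears here
        nginx -V

        # Request a page with compression allowed and look for Content-Encoding: gzip
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://localhost/ | grep -i content-encoding

        # Sketch of widening the compressed MIME types (adjust to taste):
        # gzip_types text/css application/x-javascript application/javascript application/json;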

    Read the article

  • Copying a large directory tree locally? cp or rsync?

    - by Rory
    I have to copy a large directory tree, about 1.8 TB. It's all local. Out of habit I'd use rsync, however I wonder if there's much point, and if I should rather use cp. I'm worried about permissions and uid/gid, since they have to be preserved in the copy (I know rsync does this), as well as things like symlinks. The destination is empty, so I don't have to worry about conditionally updating some files. It's all local disk access, so I don't have to worry about ssh or the network. The reason I'd be tempted away from rsync is that rsync might do more than I need. rsync checksums files. I don't need that, and am concerned that it might take longer than cp. So what do you reckon, rsync or cp?
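    For what it's worth, rsync only checksums file contents when -c is passed; for a local copy into an empty destination the two commands below should behave very similarly, both preserving permissions, ownership and symlinks (a sketch with placeholder paths):

        # cp: -a = -dR --preserve=all (permissions, ownership, timestamps, symlinks)
        cp -a /source/tree /destination/

        # rsync: -a = -rlptgoD; -H additionally preserves hard links
        rsync -aH /source/tree /destination/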

    Read the article

  • Trixbox CentOS Default GW Problem (Multi-homed server)

    - by slashp
    I'm having an issue with a CentOS trixbox server which is dual-homed (one private-facing NIC [eth1], one internet-facing NIC [eth0]). I can't seem to get the default gateway to set properly to our ISP's gateway via eth0. I've modified /etc/sysconfig/network to contain both a GATEWAY and a GATEWAYDEV line, and removed the GATEWAY line from /etc/sysconfig/network-scripts/ifcfg-eth1 (as well as /etc/sysconfig/network-scripts/ifcfg-eth0). No default gateway shows up in the routing table unless it's specified in the ifcfg-eth1 file (which is both the wrong interface and the wrong gateway IP); otherwise, the routing table simply does not contain a default gateway. Any ideas would be greatly appreciated! Thanks! EDIT: Just realized that when attempting to add the default gateway manually using the route add command, I receive an error stating: SIOCADDRT: Network is unreachable. I know this error can occur when your default gateway and interface IP address are not on the same subnet; in this case, the public IP address of eth0 is a /29.
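    For reference, a minimal sketch of the pieces described above; the gateway address is a placeholder, and as the last sentence notes it has to fall inside eth0's /29, otherwise route add fails with SIOCADDRT: Network is unreachable.

        # /etc/sysconfig/network
        NETWORKING=yes
        GATEWAY=203.0.113.9      # placeholder: must be within eth0's subnet
        GATEWAYDEV=eth0

        # Adding the default route by hand, pinned to the internet-facing NIC
        route add default gw 203.0.113.9 dev eth0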

    Read the article

  • ExaLogic 2.01 Implementations– partner resource kit & training material

    - by JuergenKress
    Are you working on ExaLogic 2.01 implementations? Let us know, we are happy to support you! Please make sure that you contact us for dedicated technical support. Additionally, we added new material to the ExaLogic wiki page: Benefits of deploying Oracle e-Business Suite on Exalogic and Exadata.pdf, Exalogic-security-1561688.pdf, Oracle Exalogic Elastic Cloud Satement of direction.pdf (Oracle and partner confidential), and ExaLogic 2.01 training material. For all material, please visit the WebLogic Community Workspace (WebLogic Community membership required). WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: ExaLogic,ExaLogic 2.01,ExaLogic kit,ExaLogic training,enablement,education,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • How to install/change locale on Debian?

    - by Hongli Lai
    I've written a web application for which the user interface is in Dutch. I use the system's date and time routines to format date strings in the application. However, the date strings that the system formats are in English but I want them in Dutch, so I need to set the system's locale. How do I do that on Debian? I tried setting LC_ALL=nl_NL but it doesn't seem to have any effect:

        $ date
        Sat Aug 15 14:31:31 UTC 2009
        $ LC_ALL=nl_NL date
        Sat Aug 15 14:31:36 UTC 2009

    I remember that setting LC_ALL on my Ubuntu desktop system works fine. Do I need to install extra packages to make this work, or am I doing it entirely wrong?
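    On Debian a locale has to be generated before the C library will use it, which would explain why LC_ALL=nl_NL has no effect; a sketch of the usual steps (nl_NL.UTF-8 used as an example):

        # Either pick nl_NL.UTF-8 in the interactive tool ...
        sudo dpkg-reconfigure locales

        # ... or enable it by hand and regenerate
        echo "nl_NL.UTF-8 UTF-8" | sudo tee -a /etc/locale.gen
        sudo locale-gen

        # Then Dutch date formatting works
        LC_ALL=nl_NL.UTF-8 date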

    Read the article

  • ubuntu input/output error

    - by rplevy
    I'm having a problem with Ubuntu that I'm finding hard to troubleshoot, for reasons that will become clear:

        reboot
        -bash: /sbin/reboot: Input/output error
        dmesg
        -bash: /bin/dmesg: Input/output error
        ps -e
        ps: error while loading shared libraries: /lib/libproc-3.2.8.so: cannot read file data: Input/output error
        lsof
        -bash: /usr/bin/lsof: Input/output error
        fsck
        -bash: /sbin/fsck: Input/output error
        badblocks
        -bash: /sbin/badblocks: Input/output error

    So I can't see what is going on, and I can't remotely reboot. What can I do to get to the bottom of this? Interestingly:

        init 0
        Segmentation fault

    I can cat /var/syslog but not /var/log/messages or several other important files. less and more don't work, and neither do tail or head, etc.

    Read the article

  • Full HD video playback acceleration with mplayer on Ubuntu Lucid

    - by pts
    I know that for an NVidia card I can sudo apt-get install nvidia-current mplayer, reboot, and then use mplayer -vo vdpau -vc ffmpeg12vdpau,ffwmv3vdpau,ffvc1vdpau,ffh264vdpau FILE.mkv to get accelerated video playback of H.264 and other codecs, so even full HD videos can be played back with only little CPU. (And there are many other options, e.g. XBMC also supports VDPAU.) But how do I get accelerated video playback if I have a recent ATI or Intel video card on Ubuntu Lucid? How do I figure out if my video card has acceleration built in? The solution has to work with mplayer or mplayer2. It's OK for me to recompile mplayer(2), but I'd prefer installing both the kernel and the X.org X server from a binary package repository.
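    A hedged way to see what the hardware and drivers expose: vdpauinfo probes VDPAU (the NVidia path above), while Intel and some ATI chips expose VA-API instead, which vainfo reports on. The package names below are the usual Ubuntu ones and may differ on Lucid.

        # VDPAU capabilities (NVidia / VDPAU backends)
        sudo apt-get install vdpauinfo
        vdpauinfo

        # VA-API capabilities (Intel, some ATI/AMD chips)
        sudo apt-get install vainfo
        vainfo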

    Read the article

  • OBI already has a caching mechanism in presentation layer and BI server layer. How is the new in-memory caching better for performance?

    - by Varun
    Question: OBI already has a caching mechanism in presentation layer and BI server layer. How is the new in-memory caching better for performance? Answer: OBI Caching only speeds up what has been seen before. An In-memory data structure generated by the summary advisor is optimized to provide maximum value by accounting for the expected broad usage and drilldowns. It is possible to adapt the in-memory data to seasonality by running the summary advisor on specific workloads. Moreover, the in-memory data is created in an analytic database providing maximum performance for the large amount of memory available.

    Read the article

  • Centos Server/MySQL server problem

    - by Jake
    Hello all, I currently run a website that gets about 15,000-20,000 hits a day. We run a very active forum, hosted using vBulletin software. We have 4.5 million posts, 80,000 threads, and about 11,000 members, of which just under a third are active all the time. I am running an Intel Xeon quad core (2.13 GHz) with 4GB of RAM, CentOS 5.5, and DirectAdmin on the box to manage it. I also run the current stable versions of Apache, MySQL, and PHP. This is the only site hosted on this machine. During random times of day, sometimes when it gets busy, the server load can get up to around 20, but this can also happen when we only have around 200 users active. I don't understand what is causing these problems. Sometimes I get pages that generate in 0.2 seconds; other times it takes 5-8 seconds. I have customized the my.cnf file and that has not helped anything. I didn't know where else to turn, so if anyone has any suggestions please let me know. Thank you in advance.
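    A generic first step when load spikes don't track user counts is to find out which queries are slow; a sketch of enabling the slow query log for the MySQL 5.0/5.1-era builds that ship with CentOS 5 (directive names changed between versions, so verify them against your build):

        # /etc/my.cnf, [mysqld] section (MySQL 5.0/5.1 syntax)
        log_slow_queries = /var/log/mysql-slow.log
        long_query_time  = 2

        # Watch the server while the load is high
        mysqladmin -u root -p processlist
        mysqladmin -u root -p extended-status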

    Read the article

  • Installing Apache on CentOS 5.7 (problems with repo)

    - by C.S.Putra
    I'm installing Apache on CentOS 5.7, following the instructions here: http://www.if-not-true-then-false.com/2010/install-apache-php-on-fedora-centos-red-hat-rhel/ I've also installed the Remi dependency on CentOS 5 and Red Hat (RHEL) 5:

        rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm

    When I install this, there's a warning:

        /var/tmp/rpm-xfer.Bqu2xo: Header V3 DSA signature: NOKEY, key ID 217521f6

    But it says that the package is already installed. Then I move on to the third step:

        yum --enablerepo=remi install httpd php php-common

    But it says: error getting repository data for remi, repository not found. Why is that?
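    The error usually just means the remi repository definition itself was never installed; epel-release does not provide it. A sketch under that assumption (the URL is the historical EL5 location of remi-release, so verify it before use); the earlier NOKEY message only says the EPEL signing key had not been imported yet.

        # Install the remi repo definition for EL5
        rpm -Uvh http://rpms.famillecollet.com/enterprise/remi-release-5.rpm

        # Then retry the install with the repo enabled
        yum --enablerepo=remi install httpd php php-common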

    Read the article

  • Screen refresh rate on Ubuntu

    - by user24224
    Hello all, I am having problems with the refresh rate of the screen. In the monitor options, the refresh mode of the monitor has only one option: 60Hz. I have an LG 24" monitor and an ATI Radeon 3870, and I have already installed the ATI driver via the Ubuntu download centre. Any idea how I can solve this? Thanks.
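    With the open-source driver, xrandr can list and force modes; a generic sketch (output name, mode and rate below are examples, not values taken from this setup):

        # List outputs and the modes/refresh rates the driver advertises
        xrandr

        # Force a specific mode and refresh rate on one output
        xrandr --output DVI-0 --mode 1920x1200 --rate 60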

    Read the article

  • Webcast Series Part II: Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model

    - by Melissa Centurio Lopes
    Register today for the second part of this webcast series on Thursday, November 29, 2012, 10:00 a.m. PT / 1:00 p.m. ET. Project Portfolio Management solutions have an immediate and lasting impact on both providers' and contractors' bottom lines by helping to manage the costs and risks of healthcare infrastructure projects from planning through handover and operation. During this webcast, Integrated Infrastructure and Lifecycle Solutions for Capital Assets - A New Delivery Model, Garrett Harley and Thomas Koulouris will continue their discussion on healthcare infrastructure strategy changes and will cover the following topics: the shift in healthcare infrastructure strategy and how it will impact providers and contractors; the Integrated Infrastructure & Lifecycle Solutions for Capital Assets and how these solutions help your business; communication and integration between providers and contractors and why it is so important to your bottom line; and the new integrated delivery system in healthcare infrastructure and how Project Portfolio Management is critical to the success of that system.

    Read the article

  • How can I make monodevelop render text in KDE?

    - by Spikolynn
    MonoDevelop from git in KDE 4.10.2 does not render text in code edit tabs. I tried with Xfce and text is rendered OK there. I tried disabling composition with Alt+Shift+F12 and restarting the X server, but it was no better. I also tried disabling font smoothing in the MonoDevelop options and disabling plugins. I also tried temporarily deleting my KDE profile. This is a dual-screen setup on Nvidia with nouveau. The OS is slackware64-current.

    Read the article

  • How do I reference the value of a constructed environment variable in a loop?

    - by Rob Spieldenner
    What I'm trying to do is loop over environment variables. I have a number of installs that change, and each install has 3 IPs to push files to and run scripts on. I want to automate this as much as possible (so that I only have to modify a file that I'll source with the environment variables). The following is a simplified version; once I figure this out I can solve my problem. Given the following in my.props:

        COUNT=2
        A_0=foo
        B_0=bar
        A_1=fizz
        B_1=buzz

    I want to fill in the for loop in the following script:

        #!/bin/bash
        . <path>/my.props

        for ((i=0; i < COUNT; i++))
        do
            <script here>
        done

    so that I can get the values from the environment variables, like the following (but in a form that actually works):

        echo $A_$i $B_$i

    or

        A=A_$i
        B=B_$i
        echo $A $B

    and it returns foo bar, then fizz buzz.
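    Bash can resolve a variable whose name is built at runtime with indirect expansion, ${!name}; a sketch of the loop body against the my.props file above (the source path is a placeholder):

        #!/bin/bash
        . /path/to/my.props   # placeholder for <path>/my.props

        for ((i = 0; i < COUNT; i++)); do
            a="A_$i"
            b="B_$i"
            # ${!a} expands to the value of the variable whose name is stored in $a
            echo "${!a} ${!b}"   # prints "foo bar", then "fizz buzz"
        done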

    Read the article

  • Install grub on 2nd hard drive

    - by jldupont
    I have 2 HDs in my machine:
    1. Drive 1 with grub and my Windows XP OS
    2. Drive 2 with only Ubuntu 9.04
    I would like to be able to boot directly from drive 2. I am missing grub on drive 2... how do I add it? EDIT: I ended up reinstalling the whole OS.
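    A sketch of putting GRUB (the legacy GRUB that Ubuntu 9.04 uses) onto the second drive's MBR, assuming the running Ubuntu system sees that disk as /dev/sdb:

        # From the Ubuntu install on drive 2
        sudo grub-install /dev/sdb
        sudo update-grub

        # Then either make drive 2 the first boot device in the BIOS,
        # or chainload it from the existing GRUB menu on drive 1.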

    Read the article

  • All virtualhosts serving Apache default files

    - by tj111
    I'm trying to configure Apache as an in-network webserver, and am using the sites-available/sites-enabled feature as opposed to just static vhost files. I set up a couple of VirtualHosts, all with a unique DocumentRoot, however requests for all the VirtualHosts just serve up the "It's Working!" default file. I can't for the life of me figure out why it won't serve the content out of the correct directory. Here are the contents of the virtualhost directive files; let me know if I need to post more.

    default (note that apache renames this to 000-default in sites-enabled, so it's not an ordering issue):

        NameVirtualHost *:80
        ServerName emp

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName emp

            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog /var/log/apache2/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog /var/log/apache2/access.log combined

            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    billmed:

        <VirtualHost *:80>
            ServerName billmed.emp
            ServerRoot /home/empression/Projects/billmed/web/httpdocs
            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Order Allow,Deny
                Allow from All
            </Directory>
        </VirtualHost>

    Note that I have DNS zones for both emp and billmed.emp, as well as entries in /etc/hosts. My ultimate goal is to set up this machine as an in-house webserver with a custom tld (emp), but progress has been pretty slow.
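    One thing worth noting in the billmed vhost: ServerRoot is a server-level directive that points at Apache's configuration directory, while the directive that decides which directory a vhost serves is DocumentRoot, so falling back to the default page would be the expected behaviour here. A hedged sketch of how that vhost is usually written:

        <VirtualHost *:80>
            ServerName billmed.emp
            DocumentRoot /home/empression/Projects/billmed/web/httpdocs
            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Options Indexes FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>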

    Read the article

  • Airline mess - what a journey

    - by Mike Dietrich
    What a day, what a journey ... Flew this noon from Munich to Zuerich to catch my onward flight to San Francisco with Swiss. And the day started so well, with Lufthansa messing up the connecting flight by 42 minutes on a 35-minute flight. And as I was obviously the only passenger connecting to San Francisco, nobody picked me up at the airplane to bring me directly to my connection, as Swiss did for the 8 passengers connecting to Miami. So I missed my flight. What a start - and many thanks to Lufthansa. I was not the only one missing a connection, as Lufthansa/Swiss had canceled the flight before due to "technical problems". In Zuerich, Swiss rebooked me via Frankfurt with Lufthansa to board a United Airlines flight to San Francisco. "Ouch," I thought. I had my share of experience with United already, as they messed up my luggage on the way to San Francisco some years ago and it took them five (!!!) days to fly my bag over and deliver it. But it was the only option today. So I said "Yes". A big mistake, as I learned later on. The Frankfurt flight was delayed as well, "due to a late incoming aircraft". But there was plenty of time. I went to the Swiss counter at the gate and let them check whether my baggage was on that flight to Frankfurt. They said "Yes". Boarding the plane with a delay of 45 minutes (the typical Lufthansa delay these days), I spotted my Rimowa trolley right next to the plane on the airfield. So I was sure that it would be sent on to Frankfurt. In Frankfurt I went to the United counter once it opened - had to go through the passport check they do for US flights as well - and they said "Yes, your luggage is with us". Well ... Arriving in San Francisco with only a few minutes' delay and a very fast immigration procedure, I saw the first bags with Priority tags getting pushed onto the baggage claim - but mine was not there. I waited ... and waited ... and waited. Well, thanks United, you did it again!!! I have flown United Airlines twice in the past years - and in both cases they messed up my luggage on the way to San Francisco. How lovely is that ... Now the real fun started again, as the lady at the "Lost and Found" counter for luggage spotted my luggage in her system in Zuerich - and told me it was supposed to be sent with LH1191 to Frankfurt on Sept 27. But that was yesterday in Europe - it's already Sept 28 - and I saw my luggage in front of the airplane. So I'd suppose it's in Frankfurt already. But what could she do? Nothing but the awful paperwork. And "No Mr Dietrich, we don't call international numbers". Thank you, United. Next time I'll try to get a contract for a US landline in advance. They can't even tell you which plane will bring your luggage. It may be tomorrow, with the UA flight arriving around 4pm in SFO. I'm looking forward to some hours in the wonderful United Airlines call center waiting line. Last time I spent 60-90 minutes every day until I got my luggage. If it takes that long again, OOW will be over by then. I love airline travel - and especially with United Airlines. And by the way ... they gave us these nice fancy packages during the flight: That looks good - what's in that box??? Yes, really ... a bag of potato chips. Pure fat - very healthy. I doubt that I'll ever fly United Airlines again!!!

    Read the article

  • Resizing mysterious partition written by DDing an ISO file

    - by Jon
    I downloaded Clonezilla and then wrote it to a USB flash drive with this:

        dd if=clonezilla.iso of=/dev/sdb

    I've confirmed that the system boots and Clonezilla runs from the flash drive. I want to store a Clonezilla backup on the same flash drive Clonezilla is running on, but I tried it and ran out of space, so I started looking at how to resize the mysterious partition type that was generated from the ISO:

        fdisk -l /dev/sdb
        ....
           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           1         111      113664   17  Hidden HPFS/NTFS
        ....

    I've tried using ntfsresize from the Debian ntfsprogs package. I'm trying gparted next, but thought I'd ask here if anyone knows a neat way to resize a partition created on flash from a live CD image. Thanks in advance, Jon. PS: Assume Debian 6, please.
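    Because dd writes the ISO image straight over the device, the first partition is really just the ISO9660 image baked into it, so resizing it in place rarely works; a common alternative is to leave it alone and add a second partition in the free space to hold the backups. A rough sketch (device name as in the question, filesystem choice is an assumption):

        # Create partition 2 in the unused space after the image
        fdisk /dev/sdb        # n -> new primary partition 2, accept defaults, then w

        # Put a filesystem on it and use it as the backup target
        mkfs.ext3 /dev/sdb2
        mount /dev/sdb2 /mnt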

    Read the article
