Search Results

Search found 55552 results on 2223 pages for 'system variable'.

  • nginx configuration for URL URI paths

    - by hachiari
    I want to switch my webserver from Apache to nginx, but I am having difficulty converting my current .htaccess rules to an nginx configuration. The conditions I need:

    1. Everything should work as it does under Apache: static files such as js, css, jpg, png, etc. are served directly.
    2. I am currently using the CodeIgniter PHP framework, which routes everything through its URI system.

    My .htaccess configuration for the CodeIgniter URIs is:

      RewriteEngine On
      RewriteBase /
      RewriteCond %{REQUEST_URI} ^system.*
      RewriteRule ^(.*)$ /index.php/$1 [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*)$ index.php/$1 [L]
      RewriteCond %{HTTP_HOST} ^www.domain.tld [NC]
      RewriteRule ^(.*)$ hxtp://domain.tld/$1 [L,R=301]

    I am also using Minify to compress my CSS and JS files, so I call them like this:

      hxtp://domain.tld/?=css
      hxtp://domain.tld/?=js

    I tried some configurations from the net, but I could only solve problem no. 2. Thank you
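
    A minimal nginx sketch of the usual CodeIgniter setup, assuming PHP-FPM on 127.0.0.1:9000 and a web root of /var/www/mysite (both placeholders); the try_files line plays the role of the !-f/!-d rewrite conditions above, and CodeIgniter's $config['uri_protocol'] may need to be set to REQUEST_URI to match:

      # redirect www to the bare domain, as the last htaccess rule does
      server {
          listen 80;
          server_name www.domain.tld;
          rewrite ^ http://domain.tld$request_uri? permanent;
      }

      server {
          listen 80;
          server_name domain.tld;
          root /var/www/mysite;
          index index.php;

          # serve files that exist on disk (js, css, images) directly;
          # everything else falls through to the CodeIgniter front controller
          location / {
              try_files $uri $uri/ /index.php;
          }

          location ~ \.php$ {
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_pass 127.0.0.1:9000;
          }
      }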

  • cloud computing - Eucalyptus

    - by neolix
    Hi, greetings! I want to set up a small cloud using our old two-core server systems. We are new to cloud systems and have googled for the same. We are looking to host VMs on top; if anyone has done this, please share a document or a how-to. We have 50-plus servers which we are not using: 2 cores each, 4 GB RAM, 1 TB HDD. CentOS is my base OS, and we are looking to host Windows guests. Right now these servers can only do paravirtualization. Please ignore my English. Thanks
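
    One caveat worth checking before anything else: Windows guests need full (hardware-assisted) virtualization rather than paravirtualization, so the CPUs must expose VT-x or AMD-V. A quick check on CentOS, as a sketch:

      # counts hardware-virtualization capable cores; 0 means Windows guests
      # will not run under KVM or Xen HVM on this box
      egrep -c '(vmx|svm)' /proc/cpuinfo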

  • Sourcing local .bashrc .vimrc without copy to remote machine

    - by David Strejc
    Does anyone have an idea or hack for sourcing my local dotfiles on remote machines without scp-ing them over? (I will probably need more of them, so the solution should work with many files.) Is something like scp-ing .bashrc to the /tmp folder on the remote machine and then exporting a BASHRC env variable the best solution? I need this because of our company policy and fast cloud-server deployment and redeployment, and I don't want to touch the .bashrc files on the remote machines, so that my colleagues can keep their default env, which doesn't suit me.
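
    A small sketch of the /tmp idea automated with per-session cleanup, so nothing permanent is left on the remote machine (hostname and temp path are placeholders); community tools such as "sshrc" generalize this to a whole directory of dotfiles:

      # ship the rc file for this session only, then remove it on exit
      sshrc() {
          local host="$1" tmp="/tmp/.bashrc-$USER-$$"
          scp -q ~/.bashrc "$host:$tmp" && \
          ssh -t "$host" "bash --rcfile $tmp -i; rm -f $tmp"
      }
      # usage: sshrc user@server.example.com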

  • Extract attachments from mbox through MIME

    - by Simeon
    I am a little bit frustrated. I'm working on a project with the aim of building a system which automatically prints the e-mail attachments of incoming mail (an "e-mail to print" system). I have already set up an e-mail server (exim4) which receives e-mail perfectly and stores it in an mbox under /var/mail/. Now I want to extract the attachments from the mbox file through MIME decoding back to the original .PDF, .DOC, .JPG, .GIF, ... files and save them in a directory, from where they get printed. After the e-mail attachments are extracted, the messages should be deleted so they don't get extracted again. But how can I get this to work? I am not a coder, so I looked for existing scripts and programs but found nothing to work with. Could anyone give me a little help? I would be very thankful! Thanks, Simeon
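
    A minimal sketch using two stock tools, formail (from the procmail package) and munpack (from mpack); the mbox path and output directory are assumptions, and the truncation step races against newly arriving mail, so a production version should lock or rotate the mbox (e.g. with movemail) first:

      # split the mbox into individual messages and unpack each one's MIME parts
      mkdir -p /var/spool/printjobs && cd /var/spool/printjobs
      formail -s munpack < /var/mail/print
      # empty the mbox so nothing is extracted twice (see the locking caveat above)
      : > /var/mail/print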

  • ruby on rails gitorious setup on ubuntu

    - by dogmatic69
    I've been trying to install Gitorious for a while now, which requires Ruby and Rails etc. I've finally got Rails pages serving, but I can't finish the installation of Gitorious because the gem version is too new. The error logs say to run 'rake ultrasphinx:configure', and that gives:

      rake ultrasphinx:configure
      (in /var/www/apps/gitorious)
      rake aborted!
      uninitialized constant ActiveSupport::Dependencies::Mutex
      /var/www/apps/gitorious/Rakefile:10:in `require'
      (See full trace by running task with --trace)

    From searching Google, this is because of the wrong gem version, and I can't find a way to downgrade it. Apparently

      sudo gem update --system 1.4.2

    should do the trick, but Ubuntu 10.10 does not like this:

      ERROR: While executing gem ... (RuntimeError)
      gem update --system is disabled on Debian, because it will overwrite the content of the rubygems Debian package, and might break your Debian system in subtle ways. The Debian-supported way to update rubygems is through apt-get, using Debian official repositories. If you really know what you are doing, you can still update rubygems by setting the REALLY_GEM_UPDATE_SYSTEM environment variable, but please remember that this is completely unsupported by Debian.

    So I added export REALLY_GEM_UPDATE_SYSTEM=1 to .bashrc and reloaded it with . ~/.bashrc, and still the same. I've tried various forms of setting this environment variable with no luck. I've also been told on the #gitorious IRC channel to add the file config/initializers/rubygems.rb with the line require "thread" in it. This has done nothing.

    EDIT: Just found another way, which was rvm install rubygems 1.4.2, and it gave:

      Removing old Rubygems files...
      Installing rubygems dedicated to ruby-1.8.7-p334...
      Retrieving rubygems-1.4.2
      [curl progress output]
      Extracting rubygems-1.4.2 ...
      Installing rubygems for /home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby
      ERROR: Error running 'GEM_PATH="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global:/home/ubuntu/.rvm/gems/ruby-1.8.7-p334@global" GEM_HOME="/home/ubuntu/.rvm/gems/ruby-1.8.7-p334" "/home/ubuntu/.rvm/rubies/ruby-1.8.7-p334/bin/ruby" "/home/ubuntu/.rvm/src/rubygems-1.4.2/setup.rb"', please read /home/ubuntu/.rvm/log/ruby-1.8.7-p334/rubygems.install.log
      WARN: Installation of rubygems did not complete successfully.

    TL;DR: please tell me how to downgrade rubygems on Ubuntu 10.10, or how to upgrade Gitorious to work with 1.6.2 gems.
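
    One likely reason the .bashrc export changed nothing: sudo resets the environment by default (env_reset in /etc/sudoers), so a variable exported in your shell never reaches the gem process. Passing it on the sudo command line itself is a hedged first thing to try:

      # env_reset strips inherited variables; set it inside the sudo invocation
      sudo env REALLY_GEM_UPDATE_SYSTEM=1 gem update --system 1.4.2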

  • Why do my Application Compatibility Toolkit Data Collectors fail to write to my ACT Log Share?

    - by Jay Michaud
    I am trying to get the Microsoft Application Compatibility Toolkit 5.6 (version 5.6.7320.0) to work, but I cannot get the Data Collectors to write to the ACT Log Share. The configuration is as follows.

      Machine: ACT-Server
      Domain: mydomain.example.com
      OS: Windows 7 Enterprise 64-bit Edition
      Windows Firewall: File and Printer Sharing (SMB-In) enabled for Public, Domain, and Private networks
      ACT Log Share: ACT

    Share permissions*:

      Group/user names    Allow permissions
      Everyone            Full Control
      Administrator       Full Control
      Domain Admins       Full Control
      Administrators      Full Control
      ANONYMOUS LOGON     Full Control

    Folder permissions*:

      Group/user name    Allow permissions        Apply to
      ANONYMOUS LOGON    Read, write & execute    This folder, subfolders, and files
      Domain Admins      Full control             This folder, subfolders, and files
      Everyone           Read, write & execute    This folder, subfolders, and files
      Administrators     Full control             This folder, subfolders, and files
      CREATOR OWNER      Full control             Subfolders and files
      SYSTEM             Full control             This folder, subfolders, and files
      INTERACTIVE        Traverse folder / execute file, List folder / read data, Read attributes, Read extended attributes, Create files / write data, Create folders / append data, Write attributes, Write extended attributes, Delete subfolders and files, Delete, Read permissions    This folder, subfolders, and files
      SERVICE            (same as INTERACTIVE)
      BATCH              (same as INTERACTIVE)

    *I am fully aware that these permissions are excessive, but that is beside the point of this question.

    Some of the clients running the Data Collector are domain members, but some are not. I am working under the assumption that this is a Windows file-sharing permission issue or a network access policy issue, but of course, I could be wrong. It is my understanding that the Data Collector runs in the security context of the SYSTEM account, which for domain members appears on the network as MYDOMAIN\machineaccount. It is also my understanding, from reading numerous pieces of documentation, that setting the ANONYMOUS LOGON permissions as I have above should allow these computer accounts and non-domain-joined computers to access the share.

    To test connectivity, I set up the Windows XP Mode virtual machine (VM) on ACT-Server. In the VM, I opened a command prompt running as SYSTEM (using the old "at" command trick) and used it to run explorer.exe. In this Windows Explorer instance, I typed \\ACT-Server\ACT into the address bar, and then I was prompted for logon credentials; the goal, though, was not to be prompted. I also used the "net use /delete" command in the command prompt window to delete connections to the ACT-Server\IPC$ share each time my connection attempt failed. I have made sure that the appropriate firewall exceptions are in place. Since ACT-Server is a domain member, the "Network access: Sharing and security model for local accounts" security policy is set to "Classic - local users authenticate as themselves". In spite of this, I still tried enabling the Guest account and adding permissions for it on the share, to no effect.

    What am I missing here? How do I allow anonymous logons to a shared folder as a step toward getting my ACT Data Collectors to deposit their data correctly? Am I even on the right track, or is the issue elsewhere?
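
    For anonymous (null-session) access, share and NTFS ACLs alone are not sufficient; the LSA policies "Network access: Let Everyone permissions apply to anonymous users" and "Network access: Shares that can be accessed anonymously" also have to allow it. A hedged sketch of their registry equivalents (the policy and value names are real; prefer setting them via secpol.msc or GPO, and verify the effect in a lab first):

      reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v EveryoneIncludesAnonymous /t REG_DWORD /d 1 /f
      reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v NullSessionShares /t REG_MULTI_SZ /d ACT /f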

  • How to protect an OS X Server from an unauthorized physical connection?

    - by GJ
    Hi, I have an OS X 10.6 server which I administer via SSH and VNC (via an SSH tunnel). I can't leave it at the login window, since then VNC connections are refused, so I currently leave it logged in with my user account. Since it doesn't have a monitor attached, it never goes into screen-saver mode, which means it doesn't require a password to retake control. This makes it very easy for anyone connecting a keyboard, mouse, and monitor to take control of the system. The screen saver's password protection, which I can't get to activate, is, unlike the system's login window, perfectly compatible with VNC connections. How could I prevent such direct access to the server without connecting a monitor and without blocking my ability to connect with VNC? Thanks!
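
    A sketch of one possible workaround, since a headless box never idles into the screen saver on its own: require a password when the saver is active, and start the saver explicitly (for example from a launchd job or cron). The defaults keys exist on 10.6; the scheduling part is left as an assumption:

      defaults write com.apple.screensaver askForPassword -int 1
      defaults write com.apple.screensaver askForPasswordDelay -int 0
      # start the screen saver immediately (binary path as of 10.6)
      /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine &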

  • In what way does non-"full n-key rollover" hinder fast typists?

    - by Michael Kjörling
    Wikipedia claims (although the latter claim does not cite a source) that: High-end keyboards that provide full n-key rollover typically do so via a PS/2 interface as the USB mode most often used by operating systems has a maximum of only six keys plus modifiers that can be pressed at the same time.[4] This hinders fast typists, ... In what way would the system being able to recognize only six non-modifier keys at once hinder a fast typist? I consider myself a relatively fast typist and I usually press one key, plus modifiers, at once; I can't imagine any real-life situation in which the system only recognizing six non-modifier keys being pressed at once has been a limiting factor in my keyboard usage. (Multi-stroke keyboard shortcuts as used by high-end software like Visual Studio, Emacs and the like are a different matter.) Note that I am not really interested in answers centered around multiplayer computer games; I'm looking for answers that give reasons that would be relevant to typists, somehow supporting the statement made on Wikipedia.

  • Possible capacitor plague - need help identifying

    - by cornjuliox
    I've been having some PC power issues lately, and I think I've tracked it down to a bad power supply. Lately, when I'm on my PC, it will often restart without warning, displaying "Hypertransport sync flood error occurred last boot." once POST finishes. I've googled the error but can't come to a definitive conclusion as to what's causing it. I've seen posts suggesting that it might be a power supply issue, but nothing conclusive. Here's what I've done so far:

    - I haven't installed anything suspect within the last 3 months.
    - I do overclock just a tiny bit, so I tried raising the voltages a little. That didn't work, so I brought both the CPU multiplier and the voltages back to their default settings, but that didn't solve the problem either. The problem still occurs.
    - I AV-scanned the whole system; nothing suspect.
    - I suspected that it might be a bad power supply, so I cracked that open and found the following: I think it might be cap plague, but I'm not sure. It looks more like glue, to be honest. Could someone help me figure out what might be wrong with this PC?

    EDIT: Sometimes, after these restarts, I noticed that the GPU fan doesn't spin up, and the single rear case fan that happens to be connected to the same molex Y-cable as the graphics card doesn't spin up either. Anything to that?

    EDIT 2: I do use the system quite heavily, but I don't know how that factors into this. I often play Diablo 3 and EVE Online at the same time, frequently alt-tabbing between the two. I also have Firefox open in the background, sometimes with several tabs, and if I feel like it, I'll mute the in-game sound and open foobar2000 for better music. Could it be that I'm just pushing this thing too hard?

    EDIT 3: I also noticed something odd. Right before I experience these restarts, the monitor suffers from very faint lines of static moving across the screen. The monitor is still very much usable, but it is very annoying. Following the restart it disappears, then gradually reappears over the next few days, and then the machine restarts again. I find it very odd.

    System specs for good measure:
    Orion 600 W PSU
    AMD Athlon II X3 440 (overclocked to 3.14 GHz; raised the CPU multiplier from x10 to x13)
    MSI G40-775 motherboard
    1 GB Inno3D GTX 550 Ti
    4 GB DDR3 RAM
    500 GB Samsung SATA HD

  • Rails' FileStore with Linux Disk Caching or RAMdisk?

    - by Yo Ludke
    I have a Ruby on Rails application that stores its cached files on the filesystem (the Rails file-system cache store). I was thinking about changing to the memcached store, but a short test shows there isn't a big difference in speed. From linuxatemyram.com I learned a bit about file caching. On the current machine there would be around 40-45 GB of RAM left over which isn't needed by the application and which could be used by the Linux disk cache for this Rails file store. The disk is a RAID 10 system with almost 120 MB/s disk performance. How can I tell Linux to use free RAM more deliberately and not be shy about using it? Do you think it's necessary to adjust a sysctl value here, or would I get a performance advantage from putting the file store root directory on a ramdisk? (Losing the cache during a reboot wouldn't be a problem.)
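
    The kernel already uses all otherwise-free RAM for the page cache opportunistically, so the 40 GB will be used as files are read; the tunables mainly change how aggressively it is kept. A hedged sketch of both options (mount point and size are placeholders):

      # prefer keeping page cache over swapping process memory out
      sysctl vm.swappiness=10
      # keep dentry/inode metadata cached longer (default is 100)
      sysctl vm.vfs_cache_pressure=50

      # or pin the cache store in RAM outright; contents vanish on reboot
      mount -t tmpfs -o size=40g tmpfs /var/www/app/tmp/cache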

  • Best RAID setup for multimedia fileserver?

    - by Mr. Schwabe
    I'm building a fileserver for my small office. We do film and multimedia design, with only 3 clients connected. The server is primarily for local access to graphic assets and video files. I'm looking for advice on the hardware and software required, particularly for the RAID. I have the following objectives:

    A) Merged capacity. I'd like all other systems to access the data as a single mapped network drive with an initial capacity of 10 TB. So perhaps 5x 2 TB drives (plus mirror drives for redundancy).

    B) An easy way to increase capacity. Thinking long term, I'd like to 'easily' add more drives to the array for a potential two- or three-fold increase in capacity. So theoretically it could get up to a 30 TB array consisting of maybe 15x 2 TB drives of capacity (plus mirror drives for redundancy).

    C) Maximum fault tolerance. I want at least 1 mirror drive per capacity drive (in layman's terms). So if I start with 10 TB / 5x 2 TB of capacity, I suppose I would need another 5x 2 TB drives to be mirrors, so 10 drives total. But I'd also like the potential for even more redundancy, with up to 2 additional mirrors per 'capacity drive' (and the ability to add them to the array anytime with ease).

    D) An easy way to monitor drive health. I'd like an intuitive interface for managing the RAID and monitoring drive health.

    The other systems accessing this network drive will be running Windows, but also the odd Ubuntu and MacOS system as well. Are these objectives attainable? What type of RAID setup do you recommend? What hardware will be required? Also, what OS do you think this system should be running? Does it really matter? I'm no network admin, just a long-time Windoze user without much Linux experience. That said, I'm not opposed to a Linux solution if it's easy enough and more practical than a Windows OS for this server. Or maybe something such as Openfiler. The budget should hit the sweet spot for value and performance (hence my preference for 2 TB drives). The biggest focus is storage; aside from that, the system just needs to keep the drives running optimally with perhaps 2 or 3 clients accessing / writing files at any given time. The hardware quote would start with something like 10x 2 TB WD Caviar Blacks, about $1900 for the storage + $x for the remaining parts. http://ncix.com/products/index.php?sku=42775&vpn=WD2001FASS&manufacture=Western%20Digital%20WD Your advice is appreciated, thanks!
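
    On Linux, the mirror-per-data-drive objective maps naturally onto md RAID 10. A hedged sketch (device names are placeholders; growing a RAID 10 array needs a reasonably recent kernel and mdadm):

      # 10 drives -> 5x 2 TB of usable capacity, every block mirrored once
      mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]1
      mkfs.ext4 /dev/md0
      # later expansion: add a mirrored pair, reshape, then grow the fs
      mdadm --add /dev/md0 /dev/sdl1 /dev/sdm1
      mdadm --grow /dev/md0 --raid-devices=12
      resize2fs /dev/md0

    For the extra-redundancy variant, md RAID 10 also supports keeping three copies of every block (a 3-copy --layout), at the cost of more raw capacity per usable terabyte; smartd/smartctl cover the drive-health monitoring side.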

  • connect two computers together via an RS-232 serial port

    - by Richard
    Wondering if anyone knows of a solution to connect with telnet/ssh through an RS-232 serial port.

    Edit: I am looking for a way to connect two computers together via a serial port. I want to be able to view the file system of a computer through a serial port. Is this possible?

    Edit: So I have successfully connected two computers together using RS-232 serial ports with a null modem. The instructions I used are located here. Now how do I get to the file system of the host computer? Any ideas?
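
    One classic approach is to run a PPP link over the null-modem cable, which turns the serial line into a two-node IP network; after that, any TCP tool (ssh, sftp, telnet) works for browsing the file system. A sketch with placeholder addresses and /dev/ttyS0 assumed on both ends:

      # on the host:
      pppd /dev/ttyS0 115200 local noauth passive 10.0.0.1:10.0.0.2
      # on the client:
      pppd /dev/ttyS0 115200 local noauth 10.0.0.2:10.0.0.1
      # once the link is up, from the client:
      sftp user@10.0.0.1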

  • weird postgresql log entries

    - by hyperboreean
    I am trying to figure out why I get some weird entries in my postgresql log after I do a restart:

      2010-05-14 11:30:25 EEST LOG: database system was shut down at 2010-05-14 11:30:22 EEST
      2010-05-14 11:30:25 EEST LOG: autovacuum launcher started
      2010-05-14 11:30:25 EEST LOG: database system is ready to accept connections
      2010-05-14 11:30:25 EEST LOG: incomplete startup packet
      2010-05-14 11:30:40 EEST WARNING: there is already a transaction in progress
      2010-05-14 11:30:40 EEST LOG: could not receive data from client: Connection reset by peer
      2010-05-14 11:30:40 EEST LOG: unexpected EOF on client connection

    First, there's the "incomplete startup packet" line, which bugs me. Does anyone have any idea why this happens? And also, this one is very strange:

      2010-05-14 11:30:40 EEST WARNING: there is already a transaction in progress
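
    For what it's worth: "incomplete startup packet" usually just means something opened a TCP connection to the postmaster and closed it without speaking the protocol; port probes and monitoring health checks are the usual suspects. The transaction warning means a client issued BEGIN while already inside an open transaction. A hedged way to identify the client is to enable connection logging in postgresql.conf:

      log_connections = on
      log_disconnections = on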

  • Which is generally considered faster or best practice: symlinks or Apache aliases?

    - by Christopher W. Allen-Poole
    I'm curious as to what most people's views are on this subject. Personally, I will almost always prefer symlinks unless I have no other option: I find it far more obvious when someone is navigating the file system. On the other hand, aliasing is more platform-independent. Windows XP, for example, doesn't have anything remotely comparable to symlinks (NTFS junctions are not interpreted correctly by at least some environments), which means that anything which relies on symlinks in a *nix-based system cannot be transferred. (I know that 64-bit Windows OSes have symlinks, but I've not seen whether they can be read correctly by the environments previously mentioned.) In addition to this, I was also wondering which is considered faster. Is this even possible to know? Do you have a conjecture? I would imagine that since symlinks are more low-level than Apache, it would make sense for them to be resolved faster; but, on the other hand, I would guess that Apache has to do a lookup in either case, so it would be disk-read dependent.
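
    For reference, the two approaches side by side as a hedged Apache sketch (paths are placeholders; the Require line is Apache 2.4 syntax, while 2.2 uses Order/Allow directives):

      # alias approach: the mapping lives in the server config
      Alias /assets /srv/shared/assets
      <Directory /srv/shared/assets>
          Require all granted
      </Directory>

      # symlink approach: ln -s /srv/shared/assets /var/www/html/assets,
      # which requires FollowSymLinks on the parent directory
      <Directory /var/www/html>
          Options FollowSymLinks
      </Directory>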

  • Notepad++: Adding color highlighting for PHP functions?

    - by Jebego
    I've recently begun using Notepad++ and have found a part of its styling functionality that confuses me. I'm currently attempting to color all of PHP's defined functions (such as count(), strlen(), etc.). In Settings > Style Configurator, you cannot add a new style for such a function list, so instead I have begun editing stylers.xml and langs.xml. To add the new coloring, in langs.xml I've modified the php section to the following:

      <Language name="php" ext="php php3 phtml" commentLine="//" commentStart="/*" commentEnd="*/">
          <Keywords name="instre1">[default keywords]</Keywords>
          <Keywords name="instre2">[my function list]</Keywords>
      </Language>

    [default keywords] and [my function list] are replaced with word lists. I've also edited the php section in stylers.xml to look like the following:

      <LexerType name="php" desc="php" ext="">
          <WordsStyle name="QUESTION MARK" styleID="18" fgColor="FF0000" bgColor="FDF8E3" fontName="" fontStyle="0" fontSize="" />
          <WordsStyle name="DEFAULT" styleID="118" fgColor="000000" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
          <WordsStyle name="STRING" styleID="119" fgColor="FF0000" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
          <WordsStyle name="STRING VARIABLE" styleID="126" fgColor="FF0000" bgColor="FEFCF5" fontName="" fontStyle="1" fontSize="" />
          <WordsStyle name="SIMPLESTRING" styleID="120" fgColor="FF0000" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
          <WordsStyle name="WORD" styleID="121" fgColor="008040" bgColor="FEFCF5" fontName="" fontStyle="1" fontSize="" keywordClass="instre1">True False</WordsStyle>
          <WordsStyle name="NUMBER" styleID="122" fgColor="FF0000" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
          <WordsStyle name="VARIABLE" styleID="123" fgColor="0080FF" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
          <WordsStyle name="COMMENT" styleID="124" fgColor="FF8040" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
          <WordsStyle name="COMMENTLINE" styleID="125" fgColor="FF8040" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
          <WordsStyle name="OPERATOR" styleID="127" fgColor="8000FF" bgColor="FEFCF5" fontName="" fontStyle="0" fontSize="" />
          <WordsStyle name="FUNCTIONS" styleID="128" fgColor="000080" bgColor="FEFCF5" fontName="" fontStyle="1" fontSize="" keywordClass="instre2"></WordsStyle>
      </LexerType>

    The changed part is the last "FUNCTIONS" line. When I restart Notepad++ and go into Settings > Style Configurator, the FUNCTIONS style exists under the php language. I can change the style's color and can see the entire keyword list under 'Default Keywords'. However, it is not changing the coloring of the words in my code. When I edit the WORD style, which contains keywords like 'if', 'and', and 'true', things change accordingly in my code. Any ideas on how to make this work?

  • Hyper-V Ubuntu 10.04, Filesystem suddenly becomes Read-Only?

    - by Daniel Upton
    We are running an Ubuntu 10.04 VM on a Hyper-V system; the VM is dedicated to running one of our web applications. We have enabled the Hyper-V drivers in /etc/initramfs-tools/modules like so:

      hv_vmbus
      hv_storvsc
      hv_blkvsc
      hv_netvsc

    And updated the kernel image like so:

      $ update-initramfs -u

    And all was good... until this morning, when I got a support request that our web application was throwing an error 500. I checked the logs and nothing was there. Then I remembered that I had seen this on another of our Ubuntu servers, so I ran:

      $ touch foo.txt

    And my suspicions were confirmed:

      touch: cannot touch `foo.txt': Read-only file system

    Why is the filesystem randomly becoming read-only? Is this only in Ubuntu on Hyper-V? Is it a problem on Red Hat or CentOS?
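
    ext3/ext4 volumes are normally mounted with errors=remount-ro, so the kernel flips the file system read-only the moment the block layer reports an I/O error; the evidence is in the kernel log rather than the application logs. A hedged first look (device name is a placeholder):

      # find the storage error that triggered the remount
      dmesg | grep -iE 'i/o error|remount|ext[34]'
      # show the configured error behaviour for the file system
      tune2fs -l /dev/sda1 | grep -i 'errors behavior'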

  • How to change the HUDSON owner on ubuntu:

    - by Anil
    I am working with Tomcat 6 and Hudson. When I run a Hudson job, it runs as the tomcat6 user. What I want to know: is there any way to change the Hudson user to my system login user instead of tomcat6, so that I can run Hudson jobs as my system user? I just want to know whether it is possible or not, and why and how. I tried editing /etc/init.d/tomcat6, changed the tomcat6 user and group to my login ID, and then restarted Tomcat; the Hudson jobs still run as the tomcat6 user. Did I do the right thing, and if so, why is it not working? Thanks in advance
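
    If Hudson is deployed inside Tomcat, its jobs always run as whatever user the Tomcat JVM runs as. On Ubuntu the init script takes that user from /etc/default/tomcat6 rather than from /etc/init.d/tomcat6 itself, which would explain why the edit had no effect. A hedged sketch (username and paths are placeholders; the new user must own Tomcat's working directories and Hudson's home):

      sudo sed -i 's/^TOMCAT6_USER=.*/TOMCAT6_USER=anil/' /etc/default/tomcat6
      sudo chown -R anil /var/lib/tomcat6 /var/log/tomcat6 /var/cache/tomcat6
      sudo service tomcat6 restart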

  • multiple file systems for mysql

    - by RainDoctor
    Does MySQL support multiple file systems for a single database, with most of the tables being MyISAM? Context: we have a 1.5 TB MySQL database, which is growing at a rate of 200 GB per month. The storage is direct-attached, and its slots are almost full. I can add another DAS and increase the file system, but resizing volumes, resizing file systems, etc. is getting messy. Is there a concept of "tablespace, datafile" (like in Oracle) in the MySQL world? Or how do you manage MySQL databases with these kinds of constraints?
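
    MySQL has no general tablespace/datafile concept for MyISAM, but individual MyISAM tables can be placed on another file system at creation time via symlinks; a hedged sketch (paths and schema are placeholders, and the server must not run with --skip-symbolic-links):

      CREATE TABLE archive_2010 (
        id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
        payload BLOB
      ) ENGINE=MyISAM
        DATA DIRECTORY='/mnt/das2/mysql/data'
        INDEX DIRECTORY='/mnt/das2/mysql/index';

    The common alternative is to sidestep the question at the block layer: put the data directory on LVM, so a new DAS shelf extends one logical volume instead of adding mount points.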

  • Resizing mysterious partition written by DDing an ISO file

    - by Jon
    I downloaded Clonezilla and then wrote it to a USB flash drive with this:

      dd if=clonezilla.iso of=/dev/sdb

    I've confirmed that the system boots and Clonezilla runs from the flash drive. I want to store a Clonezilla backup on the same flash drive Clonezilla is running on, but I tried it and ran out of space, so I started looking at how to resize the mysterious partition type that was generated from the ISO:

      fdisk -l /dev/sdb
      ....
      Device Boot    Start    End    Blocks   Id  System
      /dev/sdb1  *       1    111    113664   17  Hidden HPFS/NTFS
      ....

    I've tried using ntfsresize from the Debian ntfsprogs package. I'm trying gparted next, but thought I'd ask here if anyone knows a neat way to resize a partition created on flash from a live-CD image. Thanks in advance, Jon

    p.s. Assume Debian 6, please.
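
    A hedged alternative to resizing: an ISO written with dd isn't a resizable file system, but it only claims the first ~111 MB of the stick, so a second partition can be created in the remaining space and used as the backup destination (device names assumed; double-check them before writing):

      # create a second primary partition in the free space:
      # inside fdisk use n, p, 2, accept the defaults, then w
      fdisk /dev/sdb
      # format it; Clonezilla can mount this partition as the image repository
      mkfs.vfat -n CZBACKUP /dev/sdb2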

  • configuration required for HIVE to be installed on a node

    - by ????? ????????
    I went through the process of manually installing Ambari (not through SSH, because I couldn't get keyless to work) and everything installed OK, except for HIVE and GANGLIA. I got this message:

      stderr: None
      stdout:
      warning: Unrecognised escape sequence '\;' in file /var/lib/ambari-agent/puppet/modules/hdp-hive/manifests/hive/service_check.pp at line 32
      warning: Dynamic lookup of $configuration is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
      notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
      notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2Smoke.sh]/ensure: defined content as '{md5}7f1d24221266a2330ec55ba620c015a9'
      notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2.sql]/ensure: defined content as '{md5}0c429dc9ae0867b5af74ef85b5530d84'
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/File[/tmp/hcatSmoke.sh]/ensure: defined content as '{md5}bae7742f7083db968cb6b2bd208874cb'
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:11:56 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException org.apache.hadoop.hive.ql.parse.SemanticException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException [Error 10001]: Table not found hcatsmokeida8c07401_date102513
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:15 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
      notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException o

    When I go to the alerts and health checks, I'm getting this:

      Hive Metastore status check CRIT for 42 minutes
      CRITICAL: Error accessing hive-metaserver status [13/06/25 03:44:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect.

    What am I doing wrong? I have already tried ambari-server reset on the database, without results.

  • Where lies the soul of a computer?

    - by Christian
    When you take the system drive and put it in a new box, do you rename it or do you keep the name? And when you put a fresh drive in the old box, do you give it a new name? What about upgrading? How many of the components do you have to change before a computer loses its identity? The CPU is often described as the heart or the brain of a computer, but where lies its soul? What determines its identity? The data on the system drive? The majority of its components? This might sound like a not-so-serious question, and it probably is, but who among you hasn't already faced this problem?

  • flashcache with mdadm and LVM

    - by Backtogeek
    I am having trouble setting up flashcache on a system with LVM and mdadm. I suspect I am either missing an obvious step or getting some mapping wrong, and I hoped someone could point me in the right direction.

    System info: CentOS 6.4, 64-bit.

    mdadm config:

      md0 : active raid1 sdd3[2] sde3[3] sdf3[4] sdg3[5] sdh3[1] sda3[0]
            204736 blocks super 1.0 [6/6] [UUUUUU]
      md2 : active raid6 sdd5[2] sde5[3] sdf5[4] sdg5[5] sdh5[1] sda5[0]
            3794905088 blocks super 1.1 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      md3 : active raid0 sdc1[1] sdb1[0]
            250065920 blocks super 1.1 512k chunks
      md1 : active raid10 sdh1[1] sda1[0] sdd1[2] sdf1[4] sdg1[5] sde1[3]
            76749312 blocks super 1.1 512K chunks 2 near-copies [6/6] [UUUUUU]

    pvscan:

      PV /dev/mapper/ssdcache   VG Xenvol   lvm2 [3.53 TiB / 3.53 TiB free]
      Total: 1 [3.53 TiB] / in use: 1 [3.53 TiB] / in no VG: 0 [0 ]

    flashcache create command used:

      flashcache_create -p back ssdcache /dev/md3 /dev/md2

    pvdisplay:

      --- Physical volume ---
      PV Name               /dev/mapper/ssdcache
      VG Name               Xenvol
      PV Size               3.53 TiB / not usable 106.00 MiB
      Allocatable           yes
      PE Size               128.00 MiB
      Total PE              28952
      Free PE               28912
      Allocated PE          40
      PV UUID               w0ENVR-EjvO-gAZ8-TQA1-5wYu-ISOk-pJv7LV

    vgdisplay:

      --- Volume group ---
      VG Name               Xenvol
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  2
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               1
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               3.53 TiB
      PE Size               128.00 MiB
      Total PE              28952
      Alloc PE / Size       40 / 5.00 GiB
      Free  PE / Size       28912 / 3.53 TiB
      VG UUID               7vfKWh-ENPb-P8dV-jVlb-kP0o-1dDd-N8zzYj

    So that is where I am at. I thought that was the job done; however, when creating a logical volume called test and mounting it at /mnt/test, the sequential write is pathetic, 60-ish MB/s. /dev/md3 is 2x SSDs in RAID 0, which alone performs at around 800 MB/s sequential write, and I am trying to cache /dev/md2, which is 6x 1 TB drives in RAID 6.

    I have read a number of pages through the day, some of them here; it is obvious from the results that the cache is not functioning, but I am unsure why. I have added the filter line in lvm.conf:

      filter = [ "r|/dev/sdb|", "r|/dev/sdc|", "r|/dev/md3|" ]

    It is probably something silly, but the cache is clearly performing no writes, so I suspect I am not mapping it or have not mounted the cache correctly.

    dmsetup status:

      ssdcache: 0 7589810176 flashcache stats:
        reads(142), writes(0)
        read hits(133), read hit percent(93)
        write hits(0) write hit percent(0)
        dirty write hits(0) dirty write hit percent(0)
        replacement(0), write replacement(0)
        write invalidates(0), read invalidates(0)
        pending enqueues(0), pending inval(0)
        metadata dirties(0), metadata cleans(0)
        metadata batch(0) metadata ssd writes(0)
        cleanings(0) fallow cleanings(0)
        no room(0) front merge(0) back merge(0)
        force_clean_block(0)
        disk reads(9), disk writes(0) ssd reads(133) ssd writes(9)
        uncached reads(0), uncached writes(0), uncached IO requeue(0)
        disk read errors(0), disk write errors(0) ssd read errors(0) ssd write errors(0)
        uncached sequential reads(0), uncached sequential writes(0)
        pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
        lru hot blocks(31136000), lru warm blocks(31136000)
        lru promotions(0), lru demotions(0)
      Xenvol-test: 0 10485760 linear

    I have included as much info as I can think of; I look forward to any replies.

  • Ubuntu: disable udev's persistent-net-generator.rules

    - by Luke404
    I'm using Ubuntu 12.04 LTS server edition, and I am modifying /etc/udev/rules.d/70-persistent-net.rules to define my own mappings of ethernet interfaces to MAC addresses; that file is initially generated by rules in /lib/udev/rules.d/75-persistent-net-generator.rules at system installation time (or at the first boot; I actually don't know, and it doesn't matter here). How can I be sure that my edited version will never, ever be overwritten by anything? Removing the persistent-net-generator, as suggested on some websites, is not the Right Thing™ to do, as the comments in the file itself say: it will be overwritten by any update of the udev package. I'm looking for a more formally correct way to disable it. Is it enough to just make sure that /etc/udev/rules.d/70-persistent-net.rules exists? Maybe there are other events that could trigger its regeneration (e.g. adding or removing ethernet interfaces on the system)?
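
    The udev convention for this is to mask the generator with a same-named override: files in /etc/udev/rules.d take precedence over /lib/udev/rules.d, and an empty file (or a symlink to /dev/null) disables the rule without touching the package's copy, so udev package updates leave it alone:

      sudo ln -s /dev/null /etc/udev/rules.d/75-persistent-net-generator.rules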
