Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.

Page 641 of 1523

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server to my local machine, but when I combine it with a cron job my ssh connection times out. Just to be clear: the data is stored on a remote server and I want it stored on my local server, so the backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in a terminal like this:

      rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    but when I combine it with a cron job like this:

      10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    the ssh connection times out. When the cron job executes, it sends a mail to the root user with output like this:

      From local.xx.xx.xx Tue Jul 2 11:20:17 2013
      X-Original-To: username
      Delivered-To: [email protected]
      From: [email protected] (Cron Daemon)
      To: [email protected]
      Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=username>
      X-Cron-Env: <USER=username>
      X-Cron-Env: <HOME=/Users/username>
      Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST)

      ssh: connect to host IP_ADDRESS port XX: Operation timed out
      rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
      rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    So the rsync command works when typed in a terminal but not when run from a cron job. Can anybody explain this?
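    One thing worth checking (a sketch, not a confirmed diagnosis): cron runs with a minimal environment. The mail above shows PATH=/usr/bin:/bin and no ssh agent, so a key that is loaded in your interactive session may be unavailable to the cron job; it may also be that a network route or VPN exists only in your login session. A crontab entry that names the key and the rsync binary explicitly looks like this (the key path is a hypothetical placeholder, and the key must be passphrase-less or served by an agent that survives logout):

      10 11 * * * /usr/bin/rsync -chavzP --stats -e "ssh -i /Users/username/.ssh/backup_key -p XX" USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP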

    Read the article

  • VNC on Xen failure

    - by BCable
    The following config works and creates a good VM in Xen:

      # Kernel Setup
      kernel = "/boot/vmlinuz-2.6.18.8-xenU"
      # Memory
      memory = "256"
      # Disk
      disk = [ "file:/opt/xen/domains/110/sda1.img,sda1,w", "file:/opt/xen/domains/110/swap.img,sda2,w" ]
      # container name
      name = "110"
      hostname = "boo"
      # Networking
      vif = ["type=ieomu, bridge=xenbr0"]
      # VNC
      vnc = 1
      #vfb = [ 'type=vnc,vncdisplay=2,vnclisten=0.0.0.0,vncpasswd=110' ]
      # Behavior Settings
      root = "/dev/sda1"
      extra = "fastboot"

    But when I uncomment the vfb line, I get the following error after it hangs for at least 30 seconds:

      [root@customer 110]# xm create boo.cfg
      Using config file "./boo.cfg".
      Error: Device 0 (vkbd) could not be connected. Hotplug scripts not working.

    Any ideas? Part two of this question: sometimes it actually works and a port is opened. When this happens, nmap shows the VNC port open and I can connect with a VNC client, but it just hangs at "Connection established." and no VNC display shows up. I've tried multiple VNC clients (TightVNC, TightVNC Java Console, RealVNC), but they all fail to connect. Does VNC through Xen require X to be started in order to function? I was under the impression that it would show the console screen, so I'm confused as to why all these issues are occurring. Thanks!
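    A sketch of an alternative vfb line that is sometimes more forgiving (a suggestion, not a confirmed fix): vncunused=1 lets Xen pick the first free display instead of insisting on display :2, which removes one failure mode when the display is already taken:

      vfb = [ 'type=vnc,vncunused=1,vnclisten=0.0.0.0,vncpasswd=110' ]

    The "Device 0 (vkbd) could not be connected. Hotplug scripts not working." message usually points at the xen hotplug/udev scripts on the host (xend and udev state) rather than at the guest config itself, so that is worth checking on the dom0 side as well.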

    Read the article

  • Ubuntu: Unsupported Sources

    - by Scott
    What kind of software is available in that source tree? Also, what is the best way to see what kind of software is available in a repository? I have always been reluctant to enable it because I didn't want to wind up with an unstable system.

    Read the article

  • lighttpd: weird behavior on multiple rewrite rule matches

    - by netmikey
    I have a 20-rewrite.conf for my PHP application looking like this:

      $HTTP["host"] =~ "www.mydomain.com" {
          url.rewrite-once += (
              "^/(img|css)/.*" => "$0",
              ".*" => "/my_app.php"
          )
      }

    I want to be able to put the webserver into a kind of "maintenance" mode while I update my application from scm. To do this, my idea was to enable an additional rewrite configuration file before this one. The 16-rewrite-maintenance.conf file looks like this:

      url.rewrite-once += (
          "^/(img|css)/.*" => "$0",
          ".*" => "/maintenance_app.php"
      )

    Now, on the maintenance page, I have a logo that doesn't get loaded; I get a 404 error. Lighttpd debug says the following:

      2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI
      2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png
      2012-12-13 20:28:06: (response.c.302) URI-scheme : http
      2012-12-13 20:28:06: (response.c.303) URI-authority: localhost
      2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png
      2012-12-13 20:28:06: (response.c.305) URI-query :
      2012-12-13 20:28:06: (response.c.300) -- splitting Request-URI
      2012-12-13 20:28:06: (response.c.301) Request-URI : /img/content/logo.png, /img/content/logo.png
      2012-12-13 20:28:06: (response.c.302) URI-scheme : http
      2012-12-13 20:28:06: (response.c.303) URI-authority: localhost
      2012-12-13 20:28:06: (response.c.304) URI-path : /img/content/logo.png, /img/content/logo.png
      2012-12-13 20:28:06: (response.c.305) URI-query :
      2012-12-13 20:28:06: (response.c.349) -- sanatising URI
      2012-12-13 20:28:06: (response.c.350) URI-path : /img/content/logo.png, /img/content/logo.png
      2012-12-13 20:28:06: (mod_access.c.135) -- mod_access_uri_handler called
      2012-12-13 20:28:06: (response.c.470) -- before doc_root
      2012-12-13 20:28:06: (response.c.471) Doc-Root : /www
      2012-12-13 20:28:06: (response.c.472) Rel-Path : /img/content/logo.png, /img/content/logo.png
      2012-12-13 20:28:06: (response.c.473) Path :
      2012-12-13 20:28:06: (response.c.521) -- after doc_root
      2012-12-13 20:28:06: (response.c.522) Doc-Root : /www
      2012-12-13 20:28:06: (response.c.523) Rel-Path : /img/content/logo.png, /img/content/logo.png
      2012-12-13 20:28:06: (response.c.524) Path : /www/img/content/logo.png, /img/content/logo.png
      2012-12-13 20:28:06: (response.c.541) -- logical -> physical
      2012-12-13 20:28:06: (response.c.542) Doc-Root : /www
      2012-12-13 20:28:06: (response.c.543) Rel-Path : /img/content/logo.png, /img/content/logo.png
      2012-12-13 20:28:06: (response.c.544) Path : /www/img/content/logo.png, /img/content/logo.png
      2012-12-13 20:28:06: (response.c.561) -- handling physical path
      2012-12-13 20:28:06: (response.c.562) Path : /www/img/content/logo.png, /img/content/logo.png
      2012-12-13 20:28:06: (response.c.618) -- file not found
      2012-12-13 20:28:06: (response.c.619) Path : /www/img/content/logo.png, /img/content/logo.png

    Any clue why lighttpd matches both rules (one from my application rewrite config and one from my maintenance rewrite config) and concatenates the results with a comma? That doesn't seem to make any sense. Shouldn't it stop after the first match with rewrite-once?
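    A sketch of one way to make the maintenance file win outright, under the assumption that the comma-joined URI comes from both += arrays being applied: scope the maintenance rules to the same host and assign with = so they replace the application's rules rather than accumulating alongside them (file and host names taken from the question; whether this sidesteps the concatenation depends on your lighttpd version's merge behavior):

      # 16-rewrite-maintenance.conf -- replaces rather than appends
      $HTTP["host"] =~ "www.mydomain.com" {
          url.rewrite-once = (
              "^/(img|css)/.*" => "$0",
              ".*" => "/maintenance_app.php"
          )
      }

    Alternatively, the simplest route is to swap which of the two files is enabled during maintenance rather than stacking both.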

    Read the article

  • Force logout a user

    - by Mithun
    When I logged into the machine as root and typed who to see which users are logged in, I found somebody else logged in as root too:

      devuser  pts/0 2011-11-18 09:55 (xxx.xxx.xxx.xxx)
      root     pts/1 2011-11-18 09:56 (xxx.xxx.xxx.xxx)
      testuser pts/2 2011-11-18 14:54 (xxx.xxx.xxx.xxx)
      root     pts/3 2011-11-18 14:55 (xxx.xxx.xxx.xxx)

    How can I force the root user on pts/3 to log out?
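    A sketch of the usual approach: kill the login shell (and anything else) attached to that terminal. The -t flag of ps and pkill selects processes by controlling terminal:

      # list what is running on that terminal first
      ps -ft pts/3
      # then kill everything attached to it (escalate to SIGKILL if SIGTERM is ignored)
      pkill -t pts/3
      pkill -KILL -t pts/3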

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a try to a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM, I get errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with the error:

      WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
      FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
      iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
      Error occurred at line: 2
      Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

      IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild the module dependencies on the virtual machine with depmod -a, but this returned an error:

      WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
      FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read about creating the directory empty and running "depmod -a" again. I no longer get the dependencies error, but now get this, and I don't have a clue how to proceed:

      WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
      FATAL: Module ip_tables not found.
      iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
      Error occurred at line: 2
      Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM and that perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that on the CentOS template there are no issues with this, so I understand it has to do with the VM config. Any help would be greatly appreciated.
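    A sketch of the direction this usually takes (assuming OpenVZ/Proxmox tooling on the hardware node; the container ID 110 and the module list are illustrative): a container has no kernel of its own, so depmod/modprobe inside the VE cannot work. The netfilter modules must be loaded on the host, and the container must be granted them via vzctl:

      # on the hardware node, not inside the container
      modprobe -a ip_tables iptable_filter iptable_mangle ipt_REJECT ipt_LOG ip_conntrack ipt_state
      vzctl set 110 --iptables "ip_tables iptable_filter iptable_mangle ipt_REJECT ipt_LOG ip_conntrack ipt_state" --save
      vzctl restart 110

    Note also that the failure is on table 'raw': that table may simply not be exposed inside a VE on that kernel, so dropping raw-table rules from the iptables-restore file is often part of the fix.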

    Read the article

  • Developer Day @ OOP 2011 with SOA Specialized Partners

    - by Jürgen Kress
    Oracle SOA Specialized Partners like Opitz Consulting take part in our key marketing events, so make sure you start your journey towards SOA Specialization! Oracle Developer Day at OOP: discover the possibilities and the power of Java technology, including live hacking with special guest Java guru Adam Bien! Enterprise applications made easy: accelerate your development with Java. Join Oracle's free full-day workshop at OOP and get to know what Java can do. Learn more about the Java strategy and product roadmap, the use cases that Java SE for Embedded opens up, and how a SOA and BPM solution can be built on top of Java. The many improvements in Java EE 6 make developers' lives considerably easier. Do you already know the potential of Java EE 6? Adam Bien's live hacking will blow you away. Torsten Winterberg, Oracle Fusion Middleware ACE Director, and Danilo Schmiedel will show how Java developers can integrate the Oracle SOA and BPM solutions. In the afternoon hands-on session you can then try out the Java Persistence API, Java Beans, CDI, and other technologies on your own laptop. At this free Oracle workshop you can exchange ideas with peers, see the latest technology demonstrated by Oracle experts, and take part in practical programming exercises. This event is for you if you want to know more about the current state of the Java roadmap, learn more about Java technologies and solutions (Java SE, ME, etc.), explore the Java EE platform, understand the benefits of Java EE 6 for your work, want to scale up to an enterprise landscape, build front ends with JavaServer Faces, or are planning or just starting new development projects. Register now! ICM - Internationales Congress Center München, Am Messesee, Trudering-Riem, 81829 München. 27 January 2011, 9:00 - 16:30. For more information on SOA Specialization and the SOA Partner Community please feel free to register at www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Wiki Website Technorati Tags: OOP,Adam Bien,Torsten Winterberg,Opitz Consulting,Oracle,SOA,SOA Specialization,OPN

    Read the article

  • How to limit ram usage of a certain binary?

    - by marc.riera
    Hello, I have a binary which indexes some stuff; it eats all my RAM and my swap, and then the server hangs. I would like to limit its RAM usage. I've been looking at cpulimit and /etc/security/limits.conf, but both of them seem focused on CPU limits and per-user/per-process settings. Has somebody limited the memory usage of a specific binary? How can I approach this issue? Thanks
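    A sketch of the classic approach: run the binary through a wrapper that caps its virtual address space with ulimit, so allocations start failing instead of driving the machine into swap. The 2 GB figure and the binary path are arbitrary placeholders:

      #!/bin/sh
      # wrapper.sh -- cap the address space of one binary, then exec it
      ulimit -v 2097152   # in KB, i.e. 2 GB of virtual memory
      exec /usr/local/bin/indexer "$@"

    If the binary always runs under a dedicated account, limits.conf can do the same per user via the "as" (address space) item.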

    Read the article

  • placing shell script under systemd control

    - by Calvin Cheng
    Assuming I have a shell script like this:

      #!/bin/sh
      # cherrypy_server.sh

      PROCESSES=10
      THREADS=1        # threads per process
      BASE_PORT=3035   # the first port used

      # you need to make the PIDFILE dir and insure it has the right permissions
      PIDFILE="/var/run/cherrypy/myproject.pid"

      WORKDIR=`dirname "$0"`
      cd "$WORKDIR"

      cp_start_proc() {
          N=$1
          P=$(( $BASE_PORT + $N - 1 ))
          ./manage.py runcpserver daemonize=1 port=$P pidfile="$PIDFILE-$N" threads=$THREADS request_queue_size=0 verbose=0
      }

      cp_start() {
          for N in `seq 1 $PROCESSES`; do
              cp_start_proc $N
          done
      }

      cp_stop_proc() {
          N=$1
          #[ -f "$PIDFILE-$N" ] && kill `cat "$PIDFILE-$N"`
          [ -f "$PIDFILE-$N" ] && ./manage.py runcpserver pidfile="$PIDFILE-$N" stop
          rm -f "$PIDFILE-$N"
      }

      cp_stop() {
          for N in `seq 1 $PROCESSES`; do
              cp_stop_proc $N
          done
      }

      cp_restart_proc() {
          N=$1
          cp_stop_proc $N
          #sleep 1
          cp_start_proc $N
      }

      cp_restart() {
          for N in `seq 1 $PROCESSES`; do
              cp_restart_proc $N
          done
      }

      case "$1" in
          "start") cp_start ;;
          "stop") cp_stop ;;
          "restart") cp_restart ;;
          *) "$@" ;;
      esac

    From the bash script, we can essentially do 3 things: start the cherrypy server by calling ./cherrypy_server.sh start, stop it by calling ./cherrypy_server.sh stop, and restart it by calling ./cherrypy_server.sh restart. How would I place this shell script under systemd's control as a cherrypy.service file (with the obvious goal of having systemd start up the cherrypy server when a machine has been rebooted)? Reference systemd service file example here - https://wiki.archlinux.org/index.php/Systemd#Using_service_file
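    A sketch of a matching unit file, under the assumption that the script lives at /opt/myproject/cherrypy_server.sh (adjust the paths). Because the script starts ten daemonized processes and then exits, Type=oneshot with RemainAfterExit=yes is the simplest fit: systemd treats the service as "active" after the start script returns and uses ExecStop to tear it down. Save as /etc/systemd/system/cherrypy.service and enable with systemctl enable cherrypy.service:

      [Unit]
      Description=CherryPy application server pool
      After=network.target

      [Service]
      Type=oneshot
      RemainAfterExit=yes
      # the PIDFILE directory must exist before the script runs
      ExecStartPre=/bin/mkdir -p /var/run/cherrypy
      ExecStart=/opt/myproject/cherrypy_server.sh start
      ExecStop=/opt/myproject/cherrypy_server.sh stop

      [Install]
      WantedBy=multi-user.target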

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long story short: I am working with tar archives that contain PNG images in base64 encoding. I would like to use bash (or whatever else works) to hook into the extraction step of tar, to decode the PNG images from base64 back to standard binary PNG after the files are unpacked. A simple

      cat $input-file | base64 -d > $output-file

    will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or minimal) extra work to decode the images? In the GNU tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions I'd like hooked into various moments of tar program execution. However, the documentation explains that these variables, along with other variables that configure tar, live in a file named backup-specs. Unfortunately, the path to this file is not given, and running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system. Background information not included in the long story short: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a tar archive (haven't pushed that to Github yet), but the images present in said tar archive cannot be used unless they are decoded from base64.
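    For what it's worth, backup-specs configures GNU tar's separate backup/restore scripts, not tar extraction itself, so it won't help here. A sketch of a small wrapper that gets the same effect with minimal user effort (script name, archive, and destination are hypothetical):

      #!/bin/sh
      # untar-effect.sh -- unpack, then decode every PNG from base64 in place
      archive="$1"
      dest="${2:-.}"
      tar -xf "$archive" -C "$dest"
      find "$dest" -name '*.png' | while read -r f; do
          base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
      done

    GNU tar also has a --to-command=CMD option that pipes each extracted file to CMD (with the member name available in $TAR_FILENAME), which can do the decode in a single pass, at the cost of creating the output files yourself.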

    Read the article

  • Configuration Help for Sendmail Required

    - by Vinayak Mahadevan
    Hi, I need some help with sendmail configuration. The basic problem is that I have some employees working from other places and they need access to their mail. So what I have done right now is this: whatever mail is meant for them that is generated from within the company and collected by my internal mail server is bounced to an external mail server, from where the employees access it. This is done through an email id on a different domain. This was working fine until I restricted external mailing access for certain users using rulesets in sendmail.cf; once that was in place, only people who had external mailing rights could send mail to people outside the office. What I would like to know is: is there any way I can expose sendmail on two different IPs and thereby configure everybody's email id to point to the same internal mail server using two different IPs, one IP when inside the company and one IP when outside? Is it possible to have one static IP configured for both internal and external access, or is there any other way it can be done with sendmail? Can anybody help me? Sorry for the long post. Regards, Vinayak
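    A sketch of how sendmail is usually bound to more than one address, in case that is the direction you take (the addresses are hypothetical; this goes in sendmail.mc, which is then rebuilt into sendmail.cf as your distribution prescribes):

      dnl listen on an internal and an external interface separately
      DAEMON_OPTIONS(`Port=smtp, Addr=192.168.0.10, Name=MTA-internal')dnl
      DAEMON_OPTIONS(`Port=smtp, Addr=203.0.113.10, Name=MTA-external')dnl

    Each DAEMON_OPTIONS line adds a listener, and the ${daemon_name} macro then lets rulesets distinguish which listener a message arrived on, so internal and external policy can differ.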

    Read the article

  • Find a process by name and kill it

    - by Fabiano PS
    So, I want to send a kill to a process whose name I know:

      ps -ef | grep '_rails master'
      root 2388    1 0 19:46 ?     00:00:04 unicorn_rails master -c /web/hero/config/unicorn.rb -E production -D
      root 2582 2172 0 20:28 pts/0 00:00:00 grep --color=auto _rails master

    It is unicorn_rails master [..]. How do I kill it? I tried sed and expr so far, but I can't pass the result as a parameter to kill.
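    A sketch of two common ways to do this; pgrep/pkill match against the full command line when given -f, which sidesteps the sed/expr parsing entirely:

      # match the whole command line and kill in one step
      pkill -f 'unicorn_rails master'

      # or extract the PID yourself; the [u] trick keeps grep from matching itself
      kill `ps -ef | grep '[u]nicorn_rails master' | awk '{print $2}'`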

    Read the article

  • how to diagnose and resolve: /usr/lib64/libz.so.1: no version information available

    - by matchew
    I had a hell of a time installing lxml for Python 2.7 on CentOS 5.6. For some background, Python 2.7 is an alternative installation of Python on CentOS 5.6, which ships with Python 2.4. It was built from source per its instructions:

      ./configure
      make
      make altinstall

    After about 20 hours of trying I managed to find a workable solution and was able to install lxml. Then I noticed the following error at the top of the interpreter:

      python2.7: /usr/lib64/libz.so.1: no version information available (required by python2.7)
      Python 2.7.2 (default, Jun 30 2011, 18:55:26)
      [GCC 4.1.2 20080704 (Red Hat 4.1.2-50)] on linux2
      Type "help", "copyright", "credits" or "license" for more information.
      >>> print 'Sheeeeut!'

    This error is printed out every time I run a script. For example:

      $ ./test.py
      /usr/local/bin/python2.7: /usr/lib64/libz.so.1: no version information available (required by /usr/local/bin/python2.7)

    The script runs flawlessly, but this error is bothersome. After some digging, I've come to believe I have a wrong version of libz installed, either an older version or one built for a different platform. I'm not quite sure how; as far as I know, I've only installed libz through yum, although I can't quite remember every little thing I tried in my twenty hours of trying. You may also be interested in what my lib64 folder looks like:

      $ ls -ltrh libz*
      -rwxr-xr-x 1 root root  84K Jan  9  2007 libz.so.1.2.3
      -rwxr-xr-x 1 root root 107K Jan  9  2007 libz.a
      -rwxr-xr-x 1 root root 154K Feb 22 23:30 libzdb.so.7.0.2
      lrwxrwxrwx 1 root root   13 Apr 20 20:46 libz.so.1 -> libz.so.1.2.3
      lrwxrwxrwx 1 root root   15 Jun 30 18:43 libzdb.so.7 -> libzdb.so.7.0.2
      lrwxrwxrwx 1 root root   13 Jul  1 11:35 libz.so -> libz.so.1.2.3
      lrwxrwxrwx 1 root root   15 Jul  1 11:35 libzdb.so -> libzdb.so.7.0.2

    Notice: the items dated Jul 1 or Jun 30 are from me. I had initially moved those files into a backup folder, as they seemed to be (1) duplicates and (2) dated after/during the lxml problems I alluded to earlier. One inclination is to completely remove Python 2.7 and reinstall; I think having it install to /usr/local was a poor default choice. However, with no make uninstall target, that seems a time-consuming task for a solution I am not sure would solve my problem.
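    A sketch of a few read-only checks that narrow this down: versioned zlib builds export ZLIB_* symbol versions, and the dynamic loader's view tells you which libz your python2.7 actually resolves at run time:

      # does the installed libz carry version symbols at all?
      strings /usr/lib64/libz.so.1 | grep ZLIB
      readelf -V /usr/lib64/libz.so.1 | head

      # which libz does the interpreter bind against?
      ldd /usr/local/bin/python2.7 | grep libz

    If strings shows no ZLIB_1.2.x entries, the library was built without symbol versioning, which is exactly what the warning complains about; reinstalling the distribution's zlib package (or removing the stray, unversioned copy so the packaged one is found first) is the usual remedy.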

    Read the article

  • Analytics in an Omni-Channel World

    - by David Dorf
    Retail has been around ever since mankind started bartering. The earliest transactions were very specific to the individuals buying and selling; then someone had the bright idea to open a store. Those transactions were a little more generic, but the store owner still knew his customers and what they wanted. As the chains rolled out, customer intimacy was sacrificed for scale, and retailers began to rely on segments and clusters. But thanks to the widespread availability of data, and the technology to convert said data into information, retailers are getting back to details. The retail industry is following a maturity model for analytics that has progressed through five stages, each delivering more value than the previous.

    Store Analytics. Brick-and-mortar retailers (and pure-play catalogers as well) that collect anonymous basket-level data are able to get some sense of demand to help with allocation decisions. Promotions and foot-traffic can be measured to understand marketing effectiveness, and perhaps focus groups can help test ideas. But decisions are influenced by the majority, using faceless customer segments and aggregated industry data points. Loyalty programs help a little, but in many cases the cost outweighs the benefits.

    Web Analytics. The Web made it much easier to collect data on specific, yet still anonymous, consumers using cookies to track visits. Clickstreams and product searches are analyzed to understand the purchase journey, gauge demand, and better understand up-selling opportunities. Personalization begins to allow retailers to target consumers with recommendations.

    Cross-Channel Analytics. This phase is a minor one, but it is where most retailers probably sit today. They are able to use information from one channel to bolster activities in another. However, there are technical challenges in combining data silos, so it's not an easy task. But for those retailers that are able to perform analytics on both sources of data, the pay-off is pretty nice. Revenue per customer begins to go up as customers have a better brand experience.

    Mobile & Social Analytics. Big data technologies are enabling a 360-degree view of the customer by incorporating psychographic data from social sites alongside traditional demographic data. Retailers can track individual preferences, opinions, hobbies, etc. in order to understand a consumer's motivations. Using mobile devices, consumers can interact with brands anywhere, anytime, accessing deep product information and reviews. Mobile, combined with a loyalty program, presents an opportunity to put shopping into geographic context: understanding paths to the store and patterns within the store, and serving as an always-on advertising conduit.

    Omni-Channel Analytics. All this data, along with the proper technology, represents a new paradigm in which the clock is turned back and retail becomes very personal once again. Rich, individualized data better illuminates demand, allows for highly localized assortments, and helps tailor up-selling. Interactions with all channels help build an accurate profile of each consumer and allow retailers to tailor the retail experience to meet the heightened expectations of today's sophisticated shopper. And of course this culminates in greater customer satisfaction and business profitability.

    Read the article

  • Start multiple instances of Firefox; Xephyr rootless mode

    - by Vi
    How can I have multiple independent instances of Mozilla Firefox 3.5 on the same X server, but started from different user accounts (and consequently with different profiles)? I had limited success with Xephyr :1 followed by DISPLAY=:1 /usr/local/bin/firefox, but Xephyr has no equivalent of Cygwin/X's "rootless" mode, so it's not comfortable. The idea is to have one Firefox instance for various "Serious Business" things and another for regular browsing with dozens of add-ons, securely isolated.
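    A sketch of an approach that skips the nested X server entirely (the account and profile names are placeholders, and it assumes the second account is granted access to your display): the -no-remote flag makes each invocation start its own instance instead of handing off to the one already running on that display.

      # allow the second local account onto the current display
      xhost +si:localuser:serioususer

      # run an isolated instance under that account, with its own profile
      su - serioususer -c 'DISPLAY=:0 firefox -no-remote -P business'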

    Read the article

  • View Link inConsistency

    - by Abhishek Dwivedi
    What is view link consistency? When multiple instances (say VO1, VO2, VO3, etc.) of an EO-based VO are based on the same underlying EO, a new row created in one of these VO instances (say VO1) can be automatically added (without re-query) to the row sets of the others (VO2, VO3, etc.). This capability is known as view link consistency. The feature works for any VO for which it is enabled, regardless of whether it is involved in a view link or not.

    What causes view link inconsistency? Unless jbo.viewlink.consistent is disabled for the VO (or globally), or setAssociationConsistent(false) is applied, any of the following can cause view link inconsistency:

    1. setWhereClause
    2. an unreferenced secondary EO
    3. findByViewCriteria()
    4. using a view link accessor row set

    Why does this happen? One of the following reasons applies.

    a. In cases 1 and 2, the view link consistency flag is disabled on that view object.

    b. As far as 3 is concerned, findByViewCriteria is used to retrieve a new row set to process programmatically without changing the contents of the default row set. In this case, unlike the previous ones, the view link consistency flag is not disabled, meaning that changes in the default row set are reflected in the new row set. However, the opposite doesn't hold true: for instance, if a row is deleted from the new row set, the corresponding row in the default row set does not get deleted. In one of my features, which involved deleting rows, I resolved the view link inconsistency issue by replacing findByViewCriteria with applyViewCriteria.

    c. Case 4 is similar to 3: whenever a view link accessor row set is retrieved, a new row set is created. Creating a new row set does not mean re-executing the query each time, only creating a new instance of a RowSet object with its default iterator reset to the "slot" before the first row. Also note that this new row set always originates from an internally created view object instance, not one you added to the data model. The internal view object instance is created as needed and added with a system-defined name to the root application module. The very reason a distinct, internally created view object instance is used is to guarantee that it remains unaffected by developer changes to the view object instances in the data model.

    Read the article

  • GRUB error: unknown filesystem

    - by Ali
    I replaced my old laptop drive, which was a Win7 and Ubuntu dual boot, with an SSD. Now I have connected the old drive through a USB adapter and I want to boot from it, but this comes up:

      unknown filesystem
      grub rescue>

    As I need the programs from the old drive, I have to boot from it from time to time, and I don't want to install that software on the new drive. It takes some time to exchange the drives, so I want to boot from USB. How can I fix this?
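    A sketch of the usual grub rescue triage: device names shift when the disk moves to USB, so GRUB's embedded prefix points at the wrong partition. The partition name (hd1,msdos5) below is purely a placeholder; use ls to find the one that actually holds /boot/grub, then point GRUB at it and hand over to the normal module:

      grub rescue> ls
      grub rescue> ls (hd1,msdos5)/boot/grub
      grub rescue> set prefix=(hd1,msdos5)/boot/grub
      grub rescue> set root=(hd1,msdos5)
      grub rescue> insmod normal
      grub rescue> normal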

    Read the article

  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard Debian package, installed with sudo apt-get install motion. Assume my IP cam's DNS name is webcam, user admin, password password. Below are my settings in /etc/motion/motion.conf:

      netcam_url http://webcam/videostream.cgi
      netcam_userpass admin:password
      target_dir /media/videos/log/motion
      # The mini-http server listens to this port for requests (default: 0 = disabled)
      webcam_port 1234
      ffmpeg_cap_new on
      ffmpeg_video_codec mpeg4
      output_motion off
      snapshot_interval 0
      # Quality of the jpeg (in percent) images produced (default: 50)
      webcam_quality 50
      # Output frames at 1 fps when no motion is detected and increase to the
      # rate given by webcam_maxrate when motion is detected (default: off)
      webcam_motion on
      # Maximum framerate for webcam streams (default: 1)
      webcam_maxrate 15
      # Restrict webcam connections to localhost only (default: on)
      webcam_localhost on
      # Limits the number of images per connection (default: 0 = unlimited)
      # Number can be defined by multiplying actual webcam rate by desired number of seconds
      # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
      webcam_limit 0
      control_port 8080
      control_authentication admin:password

    Issue #1: when I try to display http://localhost:1234 the browser looks frozen; no HTTP 404 is received, but it seems to still be waiting for data.

    Issue #2: in the output directory, motion writes a lot of jpeg snapshots, although I just want several sequenced video files.

    Log: I run motion in interactive mode in a terminal; here is the output:

      root@mercure:/etc/motion# motion -c motion-1.0.conf
      [0] Processing thread 0 - config file motion-1.0.conf
      [0] Motion 3.2.12 Started
      [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785
      [0] Thread 1 is from motion-1.0.conf
      [0] motion-httpd/3.2.12 running, accepting connections
      [0] motion-httpd: waiting for data on port TCP 8080
      [1] Thread 1 started
      [1] Resizing pre_capture buffer to 1 items
      [1] Started stream webcam server in port 1234
      [1] avcodec_open - could not open codec: Operation now in progress
      [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg
      [1] Thread exiting
      [1] Calling vid_close() from motion_cleanup
      [1] vid_close: calling netcam_cleanup
      [1] netcam camera handler: finish set, exiting
      [0] Motion thread 1 restart
      [1] Thread 1 started
      [1] Resizing pre_capture buffer to 1 items
      [1] Started stream webcam server in port 1234
      [1] avcodec_open - could not open codec: Resource temporarily unavailable
      [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg
      [1] Thread exiting
      [1] Calling vid_close() from motion_cleanup
      [1] vid_close: calling netcam_cleanup
      [1] netcam camera handler: finish set, exiting
      [0] Motion thread 1 restart
      [1] Thread 1 started
      [1] Resizing pre_capture buffer to 1 items
      [1] Started stream webcam server in port 1234
      [1] avcodec_open - could not open codec: Connection reset by peer
      [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg
      [1] Thread exiting
      [1] Calling vid_close() from motion_cleanup
      [1] vid_close: calling netcam_cleanup
      [0] httpd - Finishing
      [0] httpd Closing
      [0] httpd thread exit
      [1] netcam camera handler: finish set, exiting
      [0] Motion thread 1 restart
      [1] Thread 1 started
      [1] Resizing pre_capture buffer to 1 items
      [1] Started stream webcam server in port 1234

    It doesn't find the codec: avcodec_open - could not open codec: Operation now in progress. I've changed ffmpeg_video_codec from mpeg4 to swf and the result is the same. When using the flv format, motion writes a lot of .jpg images, but I can't see anything at http://localhost:1234:

      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg
      [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg

    Any idea how to just get the video stream recorded on my local disk?
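    For what it's worth, a sketch of two settings to try (motion 3.2.x option names; a suggestion, not a confirmed fix): the per-frame jpeg flood is governed by output_normal, which defaults to on and is separate from output_motion, and the codec error suggests picking a codec the local libavcodec build definitely provides:

      # stop the per-frame jpeg dumps
      output_normal off
      # msmpeg4 is a commonly available fallback in Debian's libav builds
      ffmpeg_video_codec msmpeg4

    The frozen browser on port 1234 is a separate matter: motion serves the stream as multipart MJPEG, which some browsers only render inside an img tag rather than as a bare page.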

    Read the article

  • ODI and the GROUP BY, SUM, etc. functions

    - by Edmundo Carmona
    The virtues of ODI. I spent a good while looking for a way to use the SUM function in ODI. I found that you can modify the KM to add a GROUP BY clause and put a Jython function on the target attribute, but that solution is very rigid: if we add a new attribute in the future, we would have to change the KM again. Well, the solution is actually much simpler. It turns out we can apply the SUM, MIN, MAX, etc. function to any numeric attribute, and ODI will automatically add the GROUP BY clause with the rest of the attributes. For example, the target table has the following attributes and mappings:

      T1.Att1 = T2.Att1
      T1.Att2 = T2.Att2
      T1.Att3 = SUM(T2.Att3)

    ODI creates this query:

      Select T2.Att1, T2.Att2, SUM(Att3) from Table2 T2 group by T2.Att1, T2.Att2

    Done. Nothing simpler.

    Read the article

  • sudoer scheme for another web developer that retains my future control of a virtual server?

    - by Tchalvak
    Background: virtual private server. I have a virtual private server that I'm looking to host multiple websites on, and I want to provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from the other sites on the server that I will develop.

    The problem: retaining control. Mainly, I want to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote accounts and perform other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - to become root and take root control away from me, for example.

    I need him not to be able to:

    - take away other admins' permissions
    - change the root password
    - control other security/administrative functions

    I would like him to still be able to:

    - install software (through apt-get)
    - restart apache
    - access mysql
    - configure mysql/apache
    - reboot
    - edit web-development configuration files in /etc/

    Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings I'm hoping for above (a sketch follows below). Edit: I have not yet finalized permissions; standard, useful sudo setups are certainly an option. The lists above are more what I'm hoping I can do; I don't know whether that setup is feasible.
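    A sketch of a sudoers fragment in that spirit (edit it with visudo; the username webdev and the command paths are placeholders, with Debian/Ubuntu paths assumed). One caveat worth stating plainly: unrestricted apt-get, or any editor run as root, can itself be parlayed into full root, so this is a speed bump rather than a hard security boundary:

      # /etc/sudoers fragment -- a sketch, not a hardened policy
      Cmnd_Alias WEBDEV_CMDS = /usr/bin/apt-get, \
                               /usr/sbin/service apache2 *, \
                               /sbin/reboot
      webdev ALL=(ALL) WEBDEV_CMDS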

    Read the article

  • How to make Shared Keys .ssh/authorized_keys and sudo work together?

    - by farinspace
    I've set up .ssh/authorized_keys and am able to log in with the new user using the pub/private key. I have also added the user to the sudoers list. The problem I have now is that when I try to execute a sudo command, something simple like:

      $ sudo cd /root

    it prompts me for my password, which I enter, but it doesn't work (I am using the private key password I set). Also, I've disabled the user's password using:

      $ passwd -l user

    What am I missing? Somewhere my initial remarks are being misunderstood. I am trying to harden my system; the ultimate goal is to use pub/private keys for logins instead of simple password authentication. I've figured out how to set all that up via the authorized_keys file. Additionally, I will ultimately prevent server logins through the root account, but before I do that I need sudo to work for a second user (the user I will log into the system with all the time). For this second user I want to prevent regular password logins and force pub/private key logins only; if I don't lock the user via passwd -l user, then without a key I can still get into the server with a regular password. But more importantly, I need to get sudo working with a pub/private key setup for a user whose password has been disabled.

    Edit: OK, I think I've got it (the solution):

    1) I've adjusted /etc/ssh/sshd_config and set PasswordAuthentication no. This prevents ssh password logins (be sure to have a working public/private key setup prior to doing this).

    2) I've adjusted the sudoers list (visudo) and added:

      root ALL=(ALL) ALL
      dimas ALL=(ALL) NOPASSWD: ALL

    3) root is the only user account that will have a password. I am testing with two user accounts, "dimas" and "sherry", which do not have a password set (passwords are blank, passwd -d user).

    The above essentially prevents everyone from logging into the system with passwords (a public/private key must be set up). Additionally, users in the sudoers list have admin abilities. They can also su to different accounts. So basically "dimas" can sudo su sherry, however "dimas" can NOT do su sherry. Similarly, any user NOT in the sudoers list can NOT do su user or sudo su user.

    NOTE: The above works but is considered poor security. Any script that is able to run code as the "dimas" or "sherry" users will be able to execute sudo to gain root access. A bug in ssh that allows remote users to log in despite the settings, a remote code execution in something like Firefox, or any other flaw that allows unwanted code to run as the user will now be able to run as root. Sudo should always require a password, or you may as well log in as root instead of some other user.

    Read the article

  • Virtual machine on Ubuntu

    - by MITHIYA MOIZ
    I have configured a virtual machine on Ubuntu with the help of this article: https://help.ubuntu.com/9.04/serverguide/C/libvirt.html. I managed to finish everything except the major part: getting the virtual host to talk to the real network, which I guess has to be done via a bridge interface. In virtual machine manager, whatever interface I try to choose, it tells me the interface is not bridged. When I try to bridge the interface eth0 as below:

      auto br0
      iface br0 inet static
          address 192.168.0.223
          network 192.168.0.0
          netmask 255.255.255.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          bridge_ports eth0
          bridge_fd 9
          bridge_hello 2
          bridge_maxage 12
          bridge_stp off

    I cannot communicate with the network on this interface; the host server loses all communication to the network. But when I remove the bridge interface from /etc/network/interfaces and configure eth0 as below, it works fine:

      # The primary network interface
      auto eth0
      iface eth0 inet static
          address 192.168.0.223
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          dns-nameservers 62.215.6.51
          gateway 192.168.0.1

    How can I set up the bridge interface correctly, and what should my /etc/network/interfaces file look like?
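    A sketch of a complete /etc/network/interfaces for this setup (addresses copied from the question; the bridge-utils package must be installed). The usual pitfall is leaving a static stanza for eth0 in place: once eth0 is enslaved to br0 it must carry no address of its own, or the host loses connectivity exactly as described:

      # /etc/network/interfaces -- a sketch; eth0 gets no address of its own
      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet manual

      auto br0
      iface br0 inet static
          address 192.168.0.223
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          dns-nameservers 62.215.6.51
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0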

    Read the article
