Search Results

Search found 24334 results on 974 pages for 'directory loop'.


  • How can I fix puppet refusing to start and asking for "master.pp"?

    - by cwd
    I'm using the very latest version of puppet and have been following the Apress "Pro Puppet" guide step by step. I have installed puppet:

        sudo aptitude install ruby libshadow-ruby1.8
        sudo aptitude install puppet puppetmaster facter

    I have edited /etc/puppet/puppet.conf to include certname:

        [master]
        certname=puppet.mydomain.com

    I have edited /etc/hosts and added the following line:

        127.0.0.1 puppet.mydomain.com puppet

    I have set the hostname of the server:

        echo "puppet.mydomain.com" > /etc/hostname
        hostname -F /etc/hostname

    And then I try to run puppet from the command line:

        puppet master --verbose --no-daemonize

    And puppet gives me this error:

        Could not parse for environment production: Could not find file /master.pp

    I'm running all commands with sudo, and the last line of the error message always says that it can't find master.pp; the path before it is my current working directory. What am I doing wrong? I should also mention that I don't have a DNS record set up for puppet.mydomain.com - I saw some online documentation mentioning this might be a problem - however I was fairly sure that the hosts file would let me get around that.
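
    A couple of hedged checks (a sketch, not a confirmed fix): pre-2.6 puppet binaries treated a bare argument as a manifest path rather than a subcommand, so "master" would be read as a file name - which matches the search for master.pp in the current working directory.

        # Check which version aptitude actually installed; the packaged
        # puppet often lags well behind the "very latest" release.
        puppet --version

        # On pre-2.6 installs the master daemon is a separate binary; if
        # it exists, running it directly avoids the subcommand parsing.
        which puppetmasterd && sudo puppetmasterd --verbose --no-daemonize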

    Read the article

  • FAVICON: Favicon not showing up

    - by Russ
    I'm using a .png favicon file and it is not showing up on my site. Doing a grep, I see the following in home.htm, which looks right to me (I have also confirmed it's in the HEAD section within home.htm):

        home.htm: <link rel="shortcut icon" type="image/png" href="favicon.png">

    The favicon.png file is in the same directory as the home.htm file. Any suggestions are welcome! Thanks all. In case the file info is revealing for anyone, I'll attach it here:
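
    Two quick sanity checks, sketched with curl (example.com stands in for the real host): confirm the icon is actually reachable at the path the page references, and remember that many browsers also probe /favicon.ico at the site root regardless of the link tag.

        # Both requests should return 200 with an image content type.
        curl -I http://example.com/favicon.png
        curl -I http://example.com/favicon.ico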

    Read the article

  • persistent data in tor browser bundle?

    - by Snesticle
    What sort of persistent data is generated by the bundled Tor? I recently did an experiment using the Tor Browser Bundle for GNU/Linux. I created two directories, A and B, and placed an identical copy of Tor in each one. Next, I placed a simple Python script in directory A that both launched the vidalia package and, when exiting the network, deleted the entire contents of A (with the exception of itself) and rebuilt the bundle from the original archive. What surprises me is that after about ten hours of browsing each, A and B now show a distinct difference in startup time. Also curious is that I get a message in the log of B that never shows up in A:

        new control connection open

    which is a notice-level advisory. This has nothing to do with what I was originally testing, but now I'm interested in what exactly is going on. By the way, I do not have to rely on Tor for my personal safety as many are forced to do, so even if you just have a hunch I'd be interested in hearing it.
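
    A simple way to see exactly what state the bundle accumulates is to diff the fresh and aged copies (a sketch; A and B are the directories from the experiment, and the Data/Tor layout is an assumption that varies by bundle version):

        # List the files that differ between the rebuilt bundle (A) and
        # the aged one (B); cached consensus/descriptor files usually
        # dominate, and they are a plausible cause of startup-time gaps.
        diff -rq A B

        # Compare the size of the accumulated Tor state.
        du -sh A/Data/Tor B/Data/Tor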

    Read the article

  • Smooth animation when using fixed time step

    - by sythical
    I'm trying to implement the game loop where the physics is independent from rendering, but my animation isn't as smooth as I would like it to be and it seems to periodically jump. Here is my code:

        // alpha is used for interpolation
        double alpha = 0, counter_old_time = 0;
        double accumulator = 0, delta_time = 0, current_time = 0, previous_time = 0;
        unsigned frame_counter = 0, current_fps = 0;
        const unsigned physics_rate = 40, max_step_count = 5;
        const double step_duration = 1.0 / 40.0, accumulator_max = step_duration * 5;

        // information about the circle (position and velocity)
        int old_pos_x = 100, new_pos_x = 100, render_pos_x = 100, velocity_x = 60;

        previous_time = al_get_time();

        while(true) {
            current_time = al_get_time();
            delta_time = current_time - previous_time;
            previous_time = current_time;
            accumulator += delta_time;

            if(accumulator > accumulator_max) {
                accumulator = accumulator_max;
            }

            while(accumulator >= step_duration) {
                if(new_pos_x > 1330) velocity_x = -15;
                else if(new_pos_x < 70) velocity_x = 15;

                old_pos_x = new_pos_x;
                new_pos_x += velocity_x;

                accumulator -= step_duration;
            }

            alpha = accumulator / static_cast<double>(step_duration);
            render_pos_x = old_pos_x + (new_pos_x - old_pos_x) * alpha;

            al_clear_to_color(al_map_rgb(20, 20, 40)); // clears the screen
            al_draw_textf(font, al_map_rgb(255, 255, 255), 20, 20, 0, "current_fps: %i", current_fps); // print fps
            al_draw_filled_circle(render_pos_x, 400, 15, al_map_rgb(255, 255, 255)); // draw circle

            // I've added this to test how the program will behave when rendering
            // takes considerably longer than updating the game.
            al_rest(0.008);

            al_flip_display(); // swaps the buffers

            frame_counter++;
            if(al_get_time() - counter_old_time >= 1) {
                current_fps = frame_counter;
                frame_counter = 0;
                counter_old_time = al_get_time();
            }
        }

    I have added a pause during the rendering part because I wanted to see how the code would behave when a lot of rendering is involved. Removing it makes the animation smooth, but then I'll have to make sure that I don't let the frame rate drop too much, and that doesn't seem like a good solution. I've been trying to fix this for a week and have had no luck, so I'd be very grateful if someone can read through my code. Thank you!

    Edit: I added the following code to work out the actual velocity (pixels per second) of the ball each time it is rendered, and surprisingly it's not constant, so I'm guessing that's the issue. I'm not sure why it's not constant.

        alpha = accumulator / static_cast<double>(step_duration);
        render_pos_x = old_pos_x + (new_pos_x - old_pos_x) * alpha;
        cout << (render_pos_x - old_render_pos) / delta_time << endl;
        old_render_pos = render_pos_x;

    Read the article

  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2\P6 Analytics 1.2 release, the first release that supported the P6 Extended Schema, a new ability was added to filter which projects can be included during an ETL run. In previous releases, all projects were included in an ETL run. Additionally, all projects with the option to enable publication are included in the ETL run by default. Because the reporting needs for P6 Extended Schema are different from those of STAR, you can define a filter that will limit the data that is included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific Portfolio, or all projects with a project code assignment of 'For Analytics.' Any criteria that can be defined in a Where clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases. Unnecessary projects could cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key for what projects are targeted during the ETL process. To set up the filter, perform the following steps:

    1. Connect to your Primavera P6 Project Management Database as Pxrptuser (extended schema owner) and create a new view:

        create or replace view star_project_view
        as
        select PROJECTOBJECTID objectid
        from projectportfolio pp, projectprojectportfolio ppp
        where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
        and pp.name = 'STAR Projects'

    The main field that MUST be selected in the view is the projectobjectid. Selecting any other field besides the projectobjectid will cause the view to be invalid and it will not work. Any Where clause can be used, but projectobjectid is the key.

    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you will define the view to be used. Add the following line, or update it if it exists:

        star.project.filter.ds1=star_project_view

    3. When running the staretl.cmd or staretl.sh process, the database link to Pxrptuser will be accessed and this view will be used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.

    Read the article

  • Why am I unable to turn off recursion in ISC BIND?

    - by nbolton
    Here's my named.conf.options file:

        options {
            directory "/var/cache/bind";
            dnssec-enable yes;
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
            # disable recursion
            recursion no;
        };

    I've tried adding allow-recursion { "none"; }; before recursion, but this also has no effect. I'm testing it by using nslookup on Windows, with google.com. as the query (and it returns an IP, so I assume recursion is on). This issue occurs on two servers with similar setups.
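
    Worth checking from the outside (a sketch; 192.0.2.1 is a placeholder for the server's address): even with recursion off, BIND will answer queries it can satisfy from its cache unless allow-query-cache is also restricted, which can make recursion look enabled in a quick nslookup test.

        # Ask for a name the server is not authoritative for. With
        # recursion genuinely off, expect REFUSED or a referral, not an
        # answer with an IP.
        dig @192.0.2.1 google.com A
        dig @192.0.2.1 google.com A +norecurse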

    Read the article

  • Mirror/Backup from SSH/SFTP to Windows

    - by Andrew Russell
    What I am trying to do is mirror a directory (recursively) from a server I can SSH/SFTP to, to a Windows machine. I want to do this as part of a script, so it can be automated. I only want to copy new or modified files. I don't want to have to download all the files every time the script runs. In other words, I'm trying to get the equivalent of RoboCopy /MIR that will work using SFTP as a source. What would you recommend?
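
    One tool that comes close to RoboCopy /MIR semantics over SFTP is lftp's mirror command (it runs on Windows under Cygwin); a sketch with placeholder host, user, and paths:

        # Mirror the remote tree into the local one: --only-newer skips
        # unchanged files, --delete removes local files that no longer
        # exist remotely (the /MIR-like part).
        lftp -u user, sftp://server.example.com \
          -e "mirror --only-newer --delete /remote/dir /cygdrive/c/backup; quit"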

    Read the article

  • missing libjpeg.so.62 from ia32 shared library

    - by user170200
    I am trying to install a chemical/molecular biology modeling program called Molsoft ICM-Pro. Initially, after downloading the program and trying to open it, I got error messages that I was missing shared libraries, and after talking with my network administrator he recommended I install the ia32 shared libraries using:

        sudo apt-get install ia32-libs

    which gives:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        ia32-libs is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    So I am assuming the libraries installed correctly, but now when I try to run the program I get this error:

        ubuntu:/home/reilly/icmd icm
        icm: error while loading shared libraries: libjpeg.so.62: cannot open shared object file: No such file or directory

    So my question is, where can I get the library containing libjpeg.so.62? Additionally, I was told I would need libXmu.so.6 and libtiff.so.3. Is there a shared library that could be missing that would contain these files? I am an Ubuntu noob, so sorry if the information I provided was unclear. Any help would be immensely appreciated! By the way, I am using Ubuntu 12.04 dual boot with Windows on an HP Pavilion dv6.
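
    On Ubuntu 12.04, libjpeg.so.62 normally comes from the libjpeg62 package; a hedged sketch, since the right variant depends on whether icm is a 32-bit or 64-bit binary:

        sudo apt-get install libjpeg62

        # If icm is a 32-bit binary on a 64-bit install, the loader needs
        # the i386 build instead (multiarch syntax):
        sudo apt-get install libjpeg62:i386

        # libXmu.so.6 usually comes from libxmu6; libtiff.so.3 is older
        # than what 12.04 ships, so it may need a compatibility package.
        apt-cache search libxmu
        apt-cache search libtiff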

    Read the article

  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5TB in a custom enclosure) showed signs of failure (Disk Utility SMART Pre-failure status - several bad sectors) and I decided to replace it. I bought a new HDD (Seagate Barracuda 2TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new HDD (one partition with rsync, the other with parted cp) and then gently replaced the old HDD with the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure, the Linux box could not detect my partitions. I recovered the partitions with testdisk and restarted the computer. After the restart I checked the SMART status of the new HDD and I get this:

        Read Error Rate
        ---------------
        Normalized  108
        Worst        99
        Threshold     6
        Value       16737944

    I got a high value on the Seek Error Rate as well. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status (5 minutes later). This time I got the following:

        Read Error Rate
        ---------------
        Normalized  109
        Worst        99
        Threshold     6
        Value       24792504

    As you can see, there has been an increase in the error rate. I am unable to interpret these numbers. Is my new hard disk already dying? What are the acceptable values in these fields for Seagate hard disks? And why is the assessment still good? While I could get temperature and airflow temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD sometimes got really hot. Is it possible that the enclosure is killing the hard disks due to high temperature?... Thanks
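
    For what it's worth, Seagate's raw Read Error Rate and Seek Error Rate values are widely reported to encode operation counts rather than error counts, so large and growing raw numbers can be normal as long as the normalized value stays well above the threshold. smartctl shows both views plus the drive's own health verdict (device name is a placeholder):

        # Full SMART report: normalized vs. raw attributes, temperature,
        # and the overall health assessment.
        sudo smartctl -a /dev/sdb

        # Run a short self-test, then read its result a few minutes later.
        sudo smartctl -t short /dev/sdb
        sudo smartctl -l selftest /dev/sdb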

    Read the article

  • Perform shell operation through secure shell

    - by Ben
    Is it possible to perform a shell operation from a bash script through a secure shell? Here is an example of why you may want to do this. Let's say you have a simple Unix operating system that you need only build and run on, but you want to do all of the development on another machine. I want to write a bash script that has the following functionality:

        1. scp a file to a location on the other machine
        2. ssh to the other machine
        3. cd into the correct directory
        4. make
        5. run the program
        6. scp the results to a file on the original computer
        7. exit ssh

    Is this remotely possible? (Pardon the pun :p)
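
    This is exactly what ssh's remote-command mode is for; a minimal sketch, assuming key-based login (build.example.com and all paths are placeholders):

        #!/bin/bash
        # Copy the source over, build and run it remotely, then pull the
        # results back to the original machine.
        scp program.c user@build.example.com:/home/user/src/

        ssh user@build.example.com <<'EOF'
        cd /home/user/src
        make
        ./program > results.txt
        EOF

        scp user@build.example.com:/home/user/src/results.txt ./results.txt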

    Read the article

  • LDAP Bind request failing

    - by Madhur Ahuja
    I have a Windows Server 2008 R2 Active Directory domain controller with the domain madhurmoss.com. I have a Linux box which is trying to connect to LDAP (389) on the above box, and the connection is failing. Upon inspection in Wireshark, I see a bind request with the following name:

        sAMAccountName=Administrator,DC=madhurmoss,DC=com

    and a result of invalidCredentials:

        80090308: LdapErr: DSID-0C0903A9, comment: AcceptSecurityContext error, data 52e, v1db0

    I want it to connect as Administrator, which lies in CN=Administrator,CN=Users,DC=madhurmoss,DC=com. The supplied credentials are correct. I believe the query sAMAccountName=Administrator,DC=madhurmoss,DC=com is wrong. Can anyone guide me on what could be wrong?
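
    The bind name in the capture does look wrong: sAMAccountName is a search attribute, not a DN component, and AD's "data 52e" code covers a bad bind DN as well as a bad password. A hedged test with OpenLDAP's ldapsearch (host name is a placeholder):

        # Bind with the account's full distinguished name...
        ldapsearch -x -H ldap://dc.madhurmoss.com -b "DC=madhurmoss,DC=com" \
          -D "CN=Administrator,CN=Users,DC=madhurmoss,DC=com" -W \
          "(sAMAccountName=Administrator)"

        # ...or with the UPN form, which AD also accepts as a bind name.
        ldapsearch -x -H ldap://dc.madhurmoss.com -b "DC=madhurmoss,DC=com" \
          -D "Administrator@madhurmoss.com" -W \
          "(sAMAccountName=Administrator)"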

    Read the article

  • Why won't SSI work in IIS?

    - by Josh Kodroff
    I can't get IIS to respect my SSI directives - it just outputs the #include directive as if it were regular old HTML. Here are the relevant data points:

        - My file with the include directive is called index.html
        - This is my directive: <!-- #include file = "header.shtml" --> (it doesn't work with virtual either)
        - The file being requested is in the same directory as the file being #include-ed
        - The SSI module is installed
        - The SSINC-shtml handler mapping is present and enabled

    I think it might be some sort of permissions issue (read/write/execute), but I don't know where those settings are in IIS 7.5.
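
    One hedged detail that may matter more than permissions: the SSINC handler is typically mapped only to *.shtml, *.shtm, and *.stm, so a directive inside index.html is served verbatim by the static-file handler - which matches the symptom exactly. A quick test from a command prompt in the site directory:

        rem Copy the page to an extension the SSI handler owns and
        rem request that URL instead.
        copy index.html index.shtml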

    Read the article

  • Why is DAVExplorer not connecting?

    - by C.W.Holeman II
    DAVExplorer is not connecting. "Connecting to a WebDAV Server" states:

        Once you have entered a location URL, and (if necessary) your login name
        and password, DAV Explorer will connect to the remote WebDAV server, and
        request a listing of the resources there. A hierarchical view of the
        sub-collections will be displayed.

    Invoke Apache Jackrabbit:

        $ java -jar jackrabbit-standalone-2.0.0.jar --port 8200
        Welcome to Apache Jackrabbit!
        -------------------------------
        Using repository directory jackrabbit
        Writing log messages to jackrabbit/log
        Starting the server...
        Apache Jackrabbit is now running at http://localhost:8200/

    Use DAVExplorer:

        $ java -jar DAVExplorer.jar

    Then connect to localhost:8200/repository/default/, which pops up:

        Login
        =====
        Login name: [admin]
        Password:   [admin]
        <OK>

    The popup closes, then nothing changes. Using cadaver confirms Jackrabbit is working:

        $ cadaver http://localhost:8200/repository/default/
        Authentication required for Jackrabbit Webdav Server on server `localhost':
        Username: admin
        Password:
        dav:/repository/default/> ls
        Listing collection `/repository/default/': succeeded.
        Coll: com    0 Mar 13 11:07
        Coll: it     0 Mar 13 11:07
        Coll: net    0 Mar 13 11:07
        Coll: org    0 Mar 13 11:07
        Coll: za     0 Mar 13 11:07
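
    Since cadaver works, the server side looks healthy; the next thing worth ruling out is the HTTP exchange itself. A raw PROPFIND - the WebDAV listing request DAV Explorer should be issuing - reproduces it:

        # Depth: 1 asks for the collection plus its immediate children;
        # a 207 Multi-Status response means listing works over plain HTTP.
        curl -u admin:admin -X PROPFIND -H "Depth: 1" \
          http://localhost:8200/repository/default/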

    Read the article

  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders); it is more about syncing configurations. I would like to have exactly the same version of Ubuntu, with all the software installed and configured, both on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC) without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following:

        1. I work on my Laptop PC and make some changes to software configuration, for example:
           - configure vim to have a new plugin
           - update the Search Tracker / Recoll file search index
           - configure Thunderbird to have an additional IMAP account ('remember password')
           - add some new bookmarks in Firefox/Chrome
           - change the desktop background image
           - install new software with apt-get install
           - build and install new software with checkinstall
           - etc.
        2. I do some 'sync' operation.
        3. I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC.
        4. I work on my Desktop PC and make some changes to software configuration, for example:
           - add a new directory to the list of directories to be backed up by DejaDup
           - add a new spell-checking dictionary to LibreOffice Writer
           - configure the Terminator software to have colored fonts
           - install a new font into the Ubuntu system
           - configure Ekiga to make phone calls
           - etc.
        5. I do some 'sync' operation.
        6. I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC.

    Question: What free/open-source software can I use to sync both machines' Ubuntu systems, installed software, and configurations? Is it possible to do that without any cloud services?

    Complementary question: It is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers, and their configurations?

    Note: I do not need all the PCs to be synced at the same time, because I work with only one single machine at once.

    Note: I considered using Chef to solve the problem, but it seems that it might be really cumbersome to maintain such a setup.

    Note: I also considered using a bootable USB with Ubuntu installed (portable Linux), but I am not sure that the video drivers would work then.
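
    A cloud-free building block for the installed-software half of this (configuration files would still need something like a dotfile repository or etckeeper on top) is to export the package selections on one machine and replay them on the other; a sketch:

        # On the machine that changed:
        dpkg --get-selections > ~/pkg-selections.txt

        # Copy the list across (e.g. over the LAN), then on the other PC:
        sudo dpkg --set-selections < ~/pkg-selections.txt
        sudo apt-get dselect-upgrade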

    Read the article

  • Double VPN Network Authentication

    - by Pyromanci
    I have a project I'm working on and I'm looking for some info. Right now I have a VPN network using Cisco PIX 501s for the VPN clients and a Cisco VPN Concentrator 3000 for the VPN server. Since the PIX is constantly connected to the VPN, I want to add an extra level of authentication, meaning that when the user on the other end goes to access anything on the VPN, they are asked for a username and password before the connection is established. I've never done this sort of structure before, so I'm not even sure where to begin, whether my current hardware can do something like this, or whether I need to throw some sort of RADIUS/LDAP/Active Directory type server into the mix.

    Read the article

  • Synergy on Mac OS X 10.7

    - by matt
    Unfortunately, I can't seem to get Synergy to work between two Mac clients, both using 10.7. One is an older 21" iMac and the other is a new 2011 MacBook Air. Here's my synergy.conf, symlinked to my home directory as ~/.synergy.conf:

        section: screens
            foo:
            bar:
                super = alt
                alt = super
        end

        section: links
            foo:
                right = bar
            bar:
                left = foo
        end

    That is, I thought the super = alt trick was mandatory in order to have alt work on a Mac, but unfortunately, nothing really works. Both the Command and Control keys do NOT work on bar but work fine on foo, given that the keyboard is paired with that screen. The keyboard modifier keys are the same on both computers, and making the mouse go between screens works as well. I was wondering if anyone else has had any success or problems running into this issue on 10.7, and was hoping there was a possible fix.

    Read the article

  • Google Drive have a folder/file in both private and public

    - by Sander
    I have my documents in Google Drive/Docs neatly organised in a structured folder system. For the purpose of not cluttering email inboxes, I would now like to share the content of one single private folder by putting it in the public directory. If possible, I'd like to do this without duplicating the content. Is it possible to organise files in Google Drive so that they have two tags/folders attached to them - one private and one public? Is there another way to quickly share/link a folder to my public folder without actually moving it away from my private folder?

    Read the article

  • Issues with touch buttons in XNA (Release state to be precise)

    - by Aditya
    I am trying to make touch buttons in WP8 with all the states (Pressed, Released, Moved), but TouchLocationState.Released is not working. Here's my code.

    Class variables:

        bool touching = false;
        int touchID;
        Button tempButton;

    Button is a separate class with a method to switch states when touched. The Update method contains the following code:

        TouchCollection touchCollection = TouchPanel.GetState();

        if (!touching && touchCollection.Count > 0)
        {
            touching = true;
            foreach (TouchLocation location in touchCollection)
            {
                for (int i = 0; i < menuButtons.Count; i++)
                {
                    touchID = location.Id; // store the ID of current touch
                    Point touchLocation = new Point((int)location.Position.X, (int)location.Position.Y); // create a point
                    Button button = menuButtons[i];
                    if (GetMenuEntryHitBounds(button).Contains(touchLocation)) // a method which returns a rectangle
                    {
                        button.SwitchState(true); // change the button state
                        tempButton = button;      // store the pressed button for accessing later
                    }
                }
            }
        }
        else if (touchCollection.Count == 0) // clears the state of all buttons if no touch is detected
        {
            touching = false;
            for (int i = 0; i < menuButtons.Count; i++)
            {
                Button button = menuButtons[i];
                button.SwitchState(false);
            }
        }

    menuButtons is a list of buttons on the menu. A separate block (within the Update method) runs after the touching variable is true:

        if (touching)
        {
            TouchLocation location;
            TouchLocation prevLocation;

            if (touchCollection.FindById(touchID, out location))
            {
                if (location.TryGetPreviousLocation(out prevLocation))
                {
                    Point point = new Point((int)location.Position.X, (int)location.Position.Y);

                    if (prevLocation.State == TouchLocationState.Pressed && location.State == TouchLocationState.Released)
                    {
                        if (GetMenuEntryHitBounds(tempButton).Contains(point))
                        {
                            // Execute the button action. I removed the excess.
                        }
                    }
                }
            }
        }

    The code for switching the button state is working fine, but the code where I want to trigger the action is not. location.State == TouchLocationState.Released mostly ends up being false (even after I release the touch, it has a value of TouchLocationState.Moved). And what is more irritating is that it sometimes works! I am really confused and have been stuck for days now. Is this the right way? If yes, then where am I going wrong? Or is there some other more effective way to do this?

    PS: I also posted this question on Stack Overflow, then realized this question is more appropriate on gamedev. Sorry if it counts as being redundant.

    Read the article

  • IIS 7 + ASP.NET 4

    - by user26712
    Hello, I have an ASP.NET application that I am trying to convert to an ASP.NET 4 application. The application is fairly simple. I have created a new web application in IIS 7.5 pointing to the directory that the ASP.NET application exists in. When I attempt to execute the application by entering http://localhost:[port] into my browser, I receive the following error:

        HTTP Error 500.21 - Internal Server Error
        Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list

        Most likely causes:
        * Managed handler is used; however, ASP.NET is not installed or is not installed completely.
        * There is a typographical error in the configuration for the handler module list.
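
    The 500.21/ManagedPipelineHandler pairing is the classic sign that ASP.NET 4 is not registered with IIS; re-running the registration tool is the usual first step (the path assumes the 32-bit framework directory - use Framework64 on an x64 system), followed by making sure the application pool targets .NET Framework v4.0:

        rem Run from an elevated command prompt:
        %windir%\Microsoft.NET\Framework\v4.0.30319\aspnet_regiis.exe -i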

    Read the article

  • WARNING Retrying Bulk Insert for file:sqlldr due to Communication Error:256

    - by user702295
    I am running my engine on Linux and am receiving an intermittent message: "WARNING Retrying bulk insert for file: sqlldr due to communication Error: 256". The engine seems to have completed successfully, but it is not clear if this error caused some of the forecast to not complete. It is also not clear what caused the error.

    Generally, if you see only the WARNING, it means that subsequent retries of the same load request eventually succeeded, and so the run as a whole is not affected. To learn more about what happened, look for .log/.bad files left in the engine's bin directory, or possibly a quote of them within the specific engine log that had the issue. The sqlnet.log file may also have some information about it, and on the database server side there may be a log/alert regarding what happened; look at the alert.log. In general, it could be that the database server or network was overloaded at the time and the connection was somehow rejected, failed, or aborted, either due to specific settings on concurrent connections/sessions or inadvertently due to a glitch in the network/OS/hardware. If this repeats and becomes more frequent during the run, you should look further into it as mentioned above. You can also track this using either SQL*Trace or java.util.logging.

    Globally enable logging by setting the oracle.jdbc.Trace system property:

        java -Doracle.jdbc.Trace=true

    Client-side tracing: your SQLNET.ORA file should contain the following lines to produce a client-side trace file:

        trace_level_client = 10
        trace_unique_client = on
        trace_file_client = sqlnet.trc
        trace_directory_client = <path_to_trace_dir>

    Server-side tracing: to enable server-side tracing, use the following parameters:

        trace_level_server = 10
        trace_file_server = server.trc
        trace_directory_server = <path_to_trace_dir>

    Tracing levels: the following values can be used for the TRACE_LEVEL* parameters:

        16 or SUPPORT — WorldWide Customer Support trace information
        10 or ADMIN   — Administration trace information
         4 or USER    — User trace information
         0 or OFF     — no tracing, the default

    Additional information is readily available via the web.

    Read the article

  • 500 Error when using custom account for application pool in IIS 7

    - by Brownie
    I have a very simple site with only static files in IIS 7 on Windows Server 2008 SP2. When I try to access any static file, I get a 500 error. If I rename an html file to have an aspx extension, it works fine. The site also works fine when using the built-in identity for the application pool. The problem occurs when I switch to using a custom account for the application pool. I have tried using both local and domain accounts to run the application pool under, and I have given those accounts full control of the website directory and files. Turning on tracing reveals this error message:

        ModuleName: IIS Web Core
        Notification: 2
        HttpStatus: 500
        HttpReason: Internal Server Error
        HttpSubStatus: 0
        ErrorCode: 2147943746
        ConfigExceptionInfo
        Notification: AUTHENTICATE_REQUEST
        ErrorCode: Either a required impersonation level was not provided, or the
                   provided impersonation level is invalid. (0x80070542)

    I have not had any luck with googling the error code.
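
    One hedged avenue: error 0x80070542 during AUTHENTICATE_REQUEST often means anonymous authentication is still impersonating IUSR while the pool runs under the custom account; configuring anonymous authentication to use the application pool identity removes the impersonation step. A sketch with appcmd (the site name is a placeholder):

        rem An empty userName tells IIS to use the pool identity for
        rem anonymous requests.
        %windir%\system32\inetsrv\appcmd set config "Default Web Site" ^
          -section:system.webServer/security/authentication/anonymousAuthentication ^
          /userName:"" /commit:apphost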

    Read the article

  • Trying to set up OpenVPN server on a vps

    - by Austin
    I'm trying to set up an OpenVPN server on my VPS for myself when I'm in public places, using this tutorial: http://tipupdate.com/how-to-install-openvpn-on-ubuntu-vps/. However, whenever I try to start the server, it gives me this:

        root@vps:~# /etc/init.d/openvpn start
         * Starting virtual private network daemon(s)...
         * Autostarting VPN 'server'                    [fail]

    The log contains this:

        Tue Dec 11 10:53:32 2012 Diffie-Hellman initialized with 1024 bit key
        Tue Dec 11 10:53:32 2012 /usr/bin/openssl-vulnkey -q -b 1024 -m <modulus omitted>
        Tue Dec 11 10:53:33 2012 TLS-Auth MTU parms [ L:1542 D:138 EF:38 EB:0 ET:0 EL:0 ]
        Tue Dec 11 10:53:33 2012 ROUTE: default_gateway=UNDEF
        Tue Dec 11 10:53:33 2012 Note: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
        Tue Dec 11 10:53:33 2012 Note: Attempting fallback to kernel 2.2 TUN/TAP interface
        Tue Dec 11 10:53:33 2012 Cannot allocate TUN/TAP dev dynamically
        Tue Dec 11 10:53:33 2012 Exiting

    So obviously it's something to do with the tun device, but I don't understand how to fix it. Thanks!
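
    On container-based VPSes (OpenVZ and friends), /dev/net/tun typically doesn't exist until TUN/TAP is enabled for the container from the host or control panel; once it is, the device node can be created by hand. A sketch:

        # Check whether the device node exists at all.
        ls -l /dev/net/tun

        # If TUN/TAP is enabled but the node is missing, create it
        # (10,200 is the standard TUN character device):
        mkdir -p /dev/net
        mknod /dev/net/tun c 10 200
        chmod 600 /dev/net/tun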

    Read the article

  • How to avoid maximum Workgroup Manager connections in Mac OS X Server 10.6

    - by Stephan
    Is there a limit in Mac OS X Server (10.6) Workgroup Manager with respect to concurrent connections to a server? I have an OS X server up and running with Open Directory configured, but I am not able to log in remotely, as I get the message that the maximum number of connections for Workgroup Manager has already been reached and I should wait for a user to disconnect. Even after a restart I get this message remotely. However, locally on the server I can start Workgroup Manager without any issues; it always lets me connect. Any advice on what I need to do to make Workgroup Manager work from a remote location? I could not find any max-connection setting in Server Admin and nothing in the slapd log files. The server license says unlimited, so I am quite sure it should not be a regular error message indicating that I should upgrade.

    Read the article

  • OVM Server for SPARC Enhancements

    - by Owen Allen
    Oracle VM Servers for SPARC saw a few improvements in Ops Center 12.2. In addition to brownfield support, we've made a number of enhancements to let you add OVM Servers for SPARC to a Server Pool and enable migration of their guests.

    - When you discover an OVMSS Control Domain and manage it with an Ops Center Agent, its guests are automatically discovered as well. The guest metadata is initially put in the local metadata library in the /guests directory, but you can move it from one library to another to enable migration.

    - Once you've discovered an OVMSS control domain, you can add it to a server pool, even if it's already configured and running logical domains. Even if live migration between OVMSS systems isn't possible due to CPU incompatibilities, you can still put them in a server pool together and enable guest recovery by configuring the CPU architecture of the guest domain as generic.

    - You can mark a guest's storage as shared to indicate that it's available to other managed OVM Server systems with the same back-end name. This lets you use storage not fully managed in an Ops Center library as part of guest migration.

    Put together, these enhancements make it much easier to manage and maintain OVMSS guests.

    Read the article

  • Giving PHP the permission to make a git pull request

    - by Bernd
    Hello, I would like to allow PHP to execute a git pull command, but there are some problems with the user and permissions. How did you solve this problem? PHP runs as user www-data, therefore I've changed the .git directory owner/group to www-data (chown www-data:www-data -R .git). As it turned out later, www-data has no SSH keys. Is it a good idea to give it one? If yes, where should it be placed? Or is it possible to allow it to use a specific key? Best regards, Bernd
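
    A common, hedged recipe: give www-data its own key pair under a home directory it can read, authorize the public key on the git remote (read-only if the host supports deploy keys), and test the pull as that user. The paths are assumptions:

        # Create a key pair owned by www-data; no passphrase, since PHP
        # cannot answer a prompt. /var/www is assumed to be its home.
        sudo -u www-data mkdir -p /var/www/.ssh
        sudo -u www-data ssh-keygen -t rsa -N "" -f /var/www/.ssh/id_rsa

        # After adding /var/www/.ssh/id_rsa.pub to the remote as a
        # read-only deploy key, verify the pull works as the web user:
        cd /path/to/repo
        sudo -u www-data git pull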

    Read the article
