Search Results

Search found 23404 results on 937 pages for 'script compression'.


  • Remote Yum mirror

    - by specto
    I have a bunch of remote computers that must be updated to the most recent packages for Red Hat 4 and Red Hat 5. I am using mrepo to mirror the RHN packages; however, the remote computers do not have an internet connection, so I have to update the mirror server that sits with the remote computers from a DVD. This keeps shipping costs down to just a disc. I am attempting to script this so I can fit all of the new packages on a CD or a DVD. I send updates about once or twice a month, depending on package requirements. So my question is: is there a good method to do this so that the only things transferred are the new packages? I wish I could just use rsync. Thanks.
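    One way to script it: keep a manifest of the packages already shipped, and stage only RPMs that are not yet in it. A minimal sketch; the paths and the manifest file are assumptions, not part of the original setup:

        #!/bin/sh
        # Stage only RPMs that were not on a previous disc, tracked via a manifest.
        MIRROR=/var/mrepo/rhel5            # hypothetical mrepo package directory
        STAGE=/tmp/dvd-stage               # staging area to burn to CD/DVD
        MANIFEST=/var/mrepo/shipped.list   # relative paths of packages already shipped

        mkdir -p "$STAGE"; touch "$MANIFEST"
        ( cd "$MIRROR" && find . -name '*.rpm' | sort ) > /tmp/current.list

        # comm -23 prints lines present only in the first file: not-yet-shipped packages.
        comm -23 /tmp/current.list "$MANIFEST" | while read -r pkg; do
            ( cd "$MIRROR" && cp --parents "$pkg" "$STAGE" )
        done

        # Record what has now been shipped.
        sort -u /tmp/current.list "$MANIFEST" > "$MANIFEST.new" && mv "$MANIFEST.new" "$MANIFEST"

    On the remote side, the staged tree can be copied into the mirror and the repository metadata regenerated, if mrepo doesn't do that itself.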


  • Changing frontend cache

    - by Utsav
    Our architecture consists of a front-end cache that most read-only users obtain their data from directly. The front-end cache sits in front of a farm of web servers that serve pages written in PHP. We need to be able to detect certain conditions at the front-end cache level and pass those values through to the back-end via HTTP headers. For example, we would like to manually tag the carrier network based on the IP address. So, for incoming traffic, if the user is coming from an IP address in the range 41.202.192.0/19, we would tag them as an Orange Cameroon user by setting the appropriate HTTP request header, e.g., X-Carrier = "Orange Cameroon". Based on the setting of this header we would like to vary the cache and serve a different banner to the end user. How would you go about doing this? Keep in mind that we don't want to pollute the cache, and we also don't want to create too many small cache segments. Assumptions: you can assume that the X-Carrier has already been detected in our cache, so for the purposes of your test you can just set this value manually in your example script.
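    Since the question allows setting X-Carrier manually for testing, one quick sanity check is whether the cache actually varies on that header; the backend would emit "Vary: X-Carrier" so the cache keeps one object per carrier rather than one per client. A hedged test sketch; the URL is a placeholder:

        #!/bin/sh
        # Hit the front-end cache with two different X-Carrier values and compare
        # the response headers. With "Vary: X-Carrier" set by the backend, repeated
        # requests with the same value should be cache hits, while a new value
        # should miss once and then be cached separately.
        URL="http://frontend.example.com/banner"   # placeholder front-end URL

        curl -s -D - -o /dev/null -H 'X-Carrier: Orange Cameroon' "$URL"
        curl -s -D - -o /dev/null -H 'X-Carrier: Orange Cameroon' "$URL"
        curl -s -D - -o /dev/null -H 'X-Carrier: default' "$URL"

    Keying the cache on one normalized header value per carrier is what keeps the segment count small: the number of cached variants equals the number of carriers you tag, not the number of client IPs.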


  • persistent data in Tor Browser Bundle?

    - by Snesticle
    What sort of persistent data is generated by the bundled Tor? I recently did an experiment using the Tor Browser Bundle for GNU/Linux. I created two directories, A and B, and placed an identical copy of Tor in each one. Next I placed a simple Python script in directory A that launched the Vidalia package and, when exiting the network, deleted the entire contents of A (with the exception of itself) and rebuilt the bundle from the original archive. What surprises me is that after about ten hours of browsing each, A and B now show a distinct difference in startup time. Also curious is that I get a message in the log of B that never shows up in A: "new control connection open", which is a notice-level advisory. This has nothing to do with what I was originally testing, but now I'm interested in what exactly is going on. By the way, I do not have to rely on Tor for my personal safety as many are forced to, so even if you just have a hunch I'd be interested in hearing it.


  • Upstart: best way for shutdown hook?

    - by Binarus
    Hi, since Ubuntu has relied on Upstart for some time now, I would like to use an Upstart job to gracefully shut down certain applications on system shutdown or reboot. It is essential that the system's shutdown or reboot is stalled until these applications are shut down. The applications will be started manually on occasion, and on system shutdown should automatically be ended by a script (which I already have). Since the applications can't be ended reliably without (nearly all) other services running, ending the applications has to be done before the rest of the shutdown begins. I think I can solve this with an Upstart job which will be triggered on shutdown, but I am unsure which events I should use in which manner. So far, I have read the following (partly contradicting) statements:

    - There is no general shutdown event in Upstart
    - Use a stanza like "start on starting shutdown" in the job definition
    - Use a stanza like "start on runlevel [06S]" in the job definition
    - Use a stanza like "start on starting runlevel [06S]" in the job definition
    - Use a stanza like "start on stopping runlevel [!06S]" in the job definition

    From these recommendations, the following questions arise:

    - Is there or is there not a general shutdown event in Ubuntu's Upstart?
    - What is the recommended way to implement a "shutdown hook"?
    - When are the runlevel [x] events triggered: when the runlevel has been entered, or while it is being entered?
    - Can we use something like "start on starting runlevel [x]" or "start on stopping runlevel [x]"?
    - What would be the best solution for my problem?

    Thank you very much, Binarus
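    For what it's worth, a minimal sketch of one commonly suggested form of such a hook. It is written as a task that starts on the "starting rc" event so that, in principle, the rc job (and with it the rest of the shutdown sequence) is held until the task exits; whether this ordering is reliable is exactly the ambiguity the question raises, and the job name and script path are assumptions:

        # Sketch only: write an Upstart job that runs the existing shutdown
        # script when the system enters runlevel 0 or 6 (shutdown or reboot).
        sudo tee /etc/init/app-shutdown.conf >/dev/null <<'EOF'
        description "gracefully stop custom applications before shutdown/reboot"
        start on starting rc RUNLEVEL=[06]
        task
        exec /usr/local/sbin/stop-my-apps.sh
        EOF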


  • help with Outlook Exchange server and curl

    - by stib
    I work on a Mac in a building full of PCs, and the IT department here doesn't have IMAP access turned on on the Exchange servers. So I miss a lot of meetings, because I don't get reminders, because I access my mail via Outlook Web Access. I had written a script to scrape my Outlook Web Access calendar and turn it into iCal format, so I could get my reminders via Thunderbird or iCal.app. It basically downloaded the calendar page via curl, parsed the HTML, and reformatted all the appointments as iCal. It wasn't elegant, but it worked. Then they changed to Outlook 2007, and it doesn't work any more. I have a sketchy knowledge of curl, and almost zero knowledge of how Outlook works. Can anyone point me towards a reference for getting calendar info out of an Exchange server without using Outlook? If I can configure curl to get the HTML I will be happy, but if there's a more elegant way, such as getting the calendar info as XML, I'll be delirious.
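    As a starting point for the curl side: OWA 2007 typically sits behind either NTLM or forms-based authentication, and curl can handle both. A hedged sketch in which every URL, path, and credential is a placeholder, since the calendar URL layout varies by deployment (inspect the real one in a browser first):

        #!/bin/sh
        OWA=https://mail.example.com/owa           # placeholder server
        USER='DOMAIN\jsmith'
        PASS='secret'
        CAL="$OWA/?cmd=contents&module=calendar"   # placeholder calendar path

        # If the server accepts NTLM directly:
        curl -s --ntlm -u "$USER:$PASS" "$CAL" -o calendar.html

        # If forms-based authentication is enabled, log in first and keep cookies:
        curl -s -c cookies.txt \
             -d "destination=$OWA&flags=0&username=$USER&password=$PASS" \
             "$OWA/auth/owaauth.dll" >/dev/null
        curl -s -b cookies.txt "$CAL" -o calendar.html

    From there the old approach still applies: parse the fetched HTML and re-emit it as iCal.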


  • links for 2011-02-17

    - by Bob Rhubart
    - ArchitectACEs - Oracle Wiki: Putting a Face on the Architect ACE. The Oracle ACEs listed here have identified themselves, or have been identified by fellow ACEs, as software architects. As... (tags: ping.fm)
    - Debra's thoughts on Oracle and User Groups: I did it - I did the Fusion UX Demo. Oracle ACE Director Debra Lilley shares her experience in presenting a Fusion Applications demo at RMOUG. (tags: oracle otn oracleace)
    - The Blas from Pas: JRuby Script to Monitor a Oracle WebLogic GridLink Data Source Remotely. "In WebLogic 10.3.4 release, a single data source implementation has been introduced to support Oracle RAC cluster. To simplify and consolidate its support for Oracle RAC, WebLogic Server has provided a single data source that is enhanced to support the capabilities of Oracle RAC." (tags: oracle otn weblogic)
    - Show Notes: Bob Hensle on IT Strategies from Oracle (ArchBeat). In Part 1 Bob Hensle talked about the various documents in the IT Strategies from Oracle library. In Part 2 (now available) Bob talks about how SOA and other factors are reflected in those documents. (tags: oracle otn entarch podcast)
    - PODCAST: Examining the state of EA and findings of recent survey | Open Group Blog. A transcript of a podcast panel discussion on the findings from a study on the current state and future direction of enterprise architecture, from The Open Group Conference, San Diego 2011. (tags: entarch opengroup)
    - A Virtual Dilemma (Antony Reynolds' Blog). SOA author Antony Reynolds shares a solution. (tags: oracle otn soa)
    - Webcast: Live Online Forum: Oracle Security - February 24, 9:00am PT. Speakers: Mary Ann Davidson, Chief Security Officer, Oracle; Tom Kyte, Senior Technical Architect, Oracle; Jeff Margolies, Partner, Security Practice, Accenture; Vipin Samar, VP, Database Security Product Development, Oracle; and Nishant Kaushik, Chief Strategist, Identity and Access Management. (tags: oracle security)
    - Obama banks on cloud, consolidation, to hold down IT costs | Computerworld NZ. President Obama's fiscal 2012 budget proposal keeps IT spending almost flat compared to fiscal 2010, mostly due to the consolidation of data centers and a shift to cloud computing systems. (tags: ping.fm)


  • Set up proxy for VPN server on Ubuntu Server 12.04

    - by Morteza Soltanabadiyan
    I have a VPN server with HTTPS, L2TP, OpenVPN, and PPTP. I want to set up a proxy on the server so that all connections coming from VPN clients will use it. I created the following bash script for it, but the proxy isn't working:

        gsettings set org.gnome.system.proxy mode 'manual'
        gsettings set org.gnome.system.proxy.http enabled true
        gsettings set org.gnome.system.proxy.http host 'cproxy.anadolu.edu.tr'
        gsettings set org.gnome.system.proxy.http port 8080
        gsettings set org.gnome.system.proxy.http authentication-user 'admin'
        gsettings set org.gnome.system.proxy.http authentication-password 'admin'
        gsettings set org.gnome.system.proxy use-same-proxy true
        export http_proxy=http://admin:[email protected]:8080
        export https_proxy=http://admin:[email protected]:8080
        export HTTP_PROXY=http://admin:[email protected]:8080
        export HTTPS_PROXY=http://admin:[email protected]:8080

    What do I have to do to make a global proxy for the server, so that all VPN clients use it automatically?
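    For context on why this doesn't work: gsettings and the http_proxy environment variables only affect programs running on the server itself that choose to honour them; they do nothing for traffic merely forwarded from VPN clients. The usual approach for forwarded traffic is a transparent redirect with iptables into something like redsocks, which relays the intercepted connections to the upstream HTTP proxy. A hedged sketch, with interface names and the redsocks port as assumptions:

        #!/bin/sh
        # Assumes redsocks is installed, listening on 127.0.0.1:12345, and
        # configured with cproxy.anadolu.edu.tr:8080 (admin/admin) as upstream.

        # Redirect HTTP from PPTP/L2TP clients (ppp interfaces) to redsocks.
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 80 -j REDIRECT --to-ports 12345
        # Same for OpenVPN clients (tun interfaces).
        iptables -t nat -A PREROUTING -i tun+ -p tcp --dport 80 -j REDIRECT --to-ports 12345

        # Keep forwarding enabled for the VPN itself.
        sysctl -w net.ipv4.ip_forward=1

    Port 443 needs a separate redsocks section in http-connect mode, since HTTPS has to be relayed via CONNECT rather than rewritten as plain HTTP.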


  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross-posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: for those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. We have performed intra-cloud tests, and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately.

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yielded a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process appears in the original post.

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set, doing a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB.

    The first chart illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    The second chart is another view of the same data, with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the various source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:

    - Source code for upload test application
    - Source code for random file generator
    - OData feeds of raw data from non-optimized transfer tests: experiment metadata, experiment datasets (2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB uploads), raw data
    - OData feeds of raw data from blocked/parallelized transfer tests: experiment metadata, experiment datasets, raw data (256KB, 512KB, 1MB, 2MB, and 4MB blocks)
    - Excel worksheet showing summarizations and comparisons


  • Fixing mac user file permissions, not the system

    - by Cawas
    Usually those files get the wrong permissions when coming from the network, even when I copy them from it, but mostly through file sharing. So, definitely not talking about Disk Utility's permission repair here, please. Regardless of how a file got the wrong permissions, I know of two clumsy ways to fix them: one is Cmd+I, and the other is chown/chmod. The command line isn't all bad, but it isn't practical either; sometimes it's just one file I need to repair, sometimes it's a bunch of them. By "repair" I mean 644 for files, 755 for folders, and the current user:group for all of them. Isn't there any app / script / Automator action out there to do that?
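    A small wrapper script makes this practical; a sketch that applies exactly the scheme described (644 for files, 755 for folders, the invoking user's user:group for everything):

        #!/bin/sh
        # repair-perms.sh <path>... : normalize ownership and permissions, recursively.
        # chown may need sudo if the copied files arrived owned by someone else.
        for target in "$@"; do
            chown -R "$(id -un):$(id -gn)" "$target"
            find "$target" -type d -exec chmod 755 {} +
            find "$target" -type f -exec chmod 644 {} +
        done

    Wrapped in an Automator "Run Shell Script" action (with input passed as arguments), the same lines also cover the drag-and-drop case.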


  • Can you set CIFS permissions from EMC Command Line?

    - by TJ.
    I am in the process of migrating file shares from my EMC NS-20 to my new VNXe 3100. I am using a RoboCopy script to move the files, but am getting errors on some files and folders. I have Domain Admin privileges, but when I go to view the security permissions on the folders it says I don't have permission. I have tried taking ownership to get around the permissions issue, but that fails too. So, as a last resort, can I set permissions on this folder from the EMC console or the web management console?


  • Scale a game object to Bounds

    - by Spikeh
    I'm trying to scale a lot of dynamically created game objects in Unity3D to the bounds of a sphere collider, based on the size of their current mesh. Each object has a different scale and mesh size. Some are bigger than the AABB of the collider, and some are smaller. Here's the script I've written so far:

        private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere)
        {
            var currentScale = objectToScale.transform.localScale;
            var currentSize = objectToScale.GetMeshHierarchyBounds().size;
            var targetSize = (sphere.radius * 2);
            var newScale = new Vector3
            {
                x = targetSize * currentScale.x / currentSize.x,
                y = targetSize * currentScale.y / currentSize.y,
                z = targetSize * currentScale.z / currentSize.z
            };
            // Debug.Log has no format-string overload here, so build the message first.
            Debug.Log(string.Format("{0} Current scale: {1}, targetSize: {2}, currentSize: {3}, newScale: {4}, currentScale.x: {5}, currentSize.x: {6}",
                objectToScale.name, currentScale, targetSize, currentSize, newScale, currentScale.x, currentSize.x));
            // DoorDevice_meshBase Current scale: (0.1, 4.0, 3.0), targetSize: 5, currentSize: (2.9, 4.0, 1.1), newScale: (0.2, 5.0, 13.4), currentScale.x: 0.125, currentSize.x: 2.869114
            // RedControlPanelForAirlock_meshBase Current scale: (1.0, 1.0, 1.0), targetSize: 5, currentSize: (0.0, 0.3, 0.2), newScale: (147.1, 16.7, 25.0), currentScale.x: 1, currentSize.x: 0.03400017
            objectToScale.transform.localScale = newScale;
        }

    And the supporting extension method:

        public static Bounds GetMeshHierarchyBounds(this GameObject go)
        {
            var bounds = new Bounds(); // not used as-is, but a struct needs to be instantiated
            if (go.renderer != null)
            {
                bounds = go.renderer.bounds; // make sure the parent is included
                Debug.Log("Found parent bounds: " + bounds);
                //bounds.Encapsulate(go.renderer.bounds);
            }
            foreach (var c in go.GetComponentsInChildren<MeshRenderer>())
            {
                Debug.Log(string.Format("Found {0}; bounds are {1}", c.name, c.bounds));
                if (bounds.size == Vector3.zero)
                {
                    bounds = c.bounds;
                }
                else
                {
                    bounds.Encapsulate(c.bounds);
                }
            }
            return bounds;
        }

    After the re-scale there doesn't seem to be any consistency to the results: some objects with completely uniform scales (x, y, z) seem to resize correctly, but others don't. It's one of those things I've been trying to fix for so long that I've lost all grasp on the logic. Any help would be appreciated!


  • Can PHP be run in Apache via mod_php and mod_fcgi side by side?

    - by Mario Parris
    I have an existing installation of Apache (2.2.10, Windows x86) using mod_php and PHP 5.2.6. Can I run another site in a virtual host using FastCGI and a different version of PHP, while still running the main site in mod_php? I've made an attempt, but when I add my FCGI settings to the virtual host container, Apache is unable to restart.

    httpd.conf mod_php settings:

        LoadModule php5_module "C:\PHP\php-5.2.17-Win32-VC6-x86\php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:\PHP\php-5.2.17-Win32-VC6-x86"

    httpd-vhosts.conf fastcgi settings:

        <VirtualHost *:80>
            DocumentRoot "C:/Inetpub/wwwroot/site-b/source/public"
            ServerName local.siteb.com
            ServerAlias local.siteb.com
            SetEnv PHPRC "C:\PHP\php-5.3.5-nts-Win32-VC6-x86\php.ini"
            FcgidInitialEnv PHPRC "C:\PHP\php-5.3.5-nts-Win32-VC6-x86"
            FcgidWrapper "C:\PHP\php-5.3.5-nts-Win32-VC6-x86\php-cgi.exe" .php
            AddHandler fcgid-script .php
        </VirtualHost>

        <Directory "C:/Inetpub/wwwroot/site-b/source/public">
            Options Indexes FollowSymLinks Includes ExecCGI
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>


  • Could not find rake-10.1.0 in any of the sources

    - by spuder
    I've got a Ruby on Rails application (GitLab) which is installed via Puppet. Everything on the test system runs fine, but production generates an error about rake:

        Running /home/git/gitlab-shell/bin/check
        Could not find rake-10.1.0 in any of the sources
        Run `bundle install` to install missing gems.

    Here is the full rake check:

        root@gitlab:/home/git# sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production
        Checking Environment ...
        Git configured for git user? ... yes
        Has python2? ... yes
        python2 is supported version? ... yes
        Checking Environment ... Finished
        Checking GitLab Shell ...
        GitLab Shell version >= 1.7.1 ? ... OK (1.7.1)
        Repo base directory exists? ... yes
        Repo base directory is a symlink? ... no
        Repo base owned by git:git? ... yes
        Repo base access is drwxrws---? ... yes
        update hook up-to-date? ... yes
        update hooks in repos are links: ...
        Could not find rake-10.1.0 in any of the sources
        Run `bundle install` to install missing gems.
        gitlab-shell self-check failed
        Try fixing it:
        Make sure GitLab is running;
        Check the gitlab-shell configuration file:
        sudo -u git -H editor /home/git/gitlab-shell/config.yml
        Please fix the error above and rerun the checks.
        Checking GitLab Shell ... Finished
        Checking Sidekiq ...
        Running? ... yes
        Number of Sidekiq processes ... 1
        Checking Sidekiq ... Finished
        Checking GitLab ...
        Database config exists? ... yes
        Database is SQLite ... no
        All migrations up? ... yes
        GitLab config exists? ... yes
        GitLab config outdated? ... no
        Log directory writable? ... yes
        Tmp directory writable? ... yes
        Init script exists? ... yes
        Init script up-to-date? ... yes
        projects have namespace: ...
        Spencer Owen / bar ... yes
        Projects have satellites? ...
        Spencer Owen / bar ... can't create, repository is empty
        Redis version >= 2.0.0? ... yes
        Your git bin path is "/usr/bin/git"
        Git version >= 1.7.10 ? ... yes (1.8.4)
        Checking GitLab ... Finished

    The 'gitlab-shell check' step effectively runs the following command. If I run that command manually, everything passes:

        root@gitlab:/home/git/gitlab# sudo -u git -H /home/git/gitlab-shell/bin/check
        Check GitLab API access: OK
        Check directories and files:
        /home/git/repositories: OK
        /home/git/.ssh/authorized_keys: OK

    I have verified that rake is in fact installed:

        root@gitlab:/home/git/gitlab# gem install rake -v 10.1.0
        root@gitlab:/home/git/gitlab# bundle install
        root@gitlab:/home/git/gitlab# sudo -u git -H gem install rake -v 10.1.0
        root@gitlab:/home/git/gitlab# sudo -u git -H bundle install

    Ruby is installed with update-alternatives:

        root@gitlab:/home/git/gitlab# sudo -u git -H ruby --version
        ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux]
        root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which ruby`
        lrwxrwxrwx 1 root root 22 Oct  8 20:26 /usr/bin/ruby -> /etc/alternatives/ruby
        root@gitlab:/home/git/gitlab# sudo -u git -H gem --version
        2.1.10
        root@gitlab:/home/git/gitlab# sudo -u git -H ls -l `which gem`
        lrwxrwxrwx 1 root root 21 Oct 10 20:50 /usr/bin/gem -> /etc/alternatives/gem

    I've tried the solutions mentioned below, to allow shared gems:

        http://stackoverflow.com/questions/19284914/bundle-exec-fails-with-could-not-find-rake-10-1-0-in-any-of-the-sources
        http://stackoverflow.com/questions/18978002/could-not-find-rake-with-bundle-exec

        root@gitlab:/home/git/gitlab# cat /home/git/gitlab/.bundle/config
        ---
        BUNDLE_FROZEN: '1'
        BUNDLE_PATH: vendor/bundle
        BUNDLE_WITHOUT: development:test:postgres
        BUNDLE_DISABLE_SHARED_GEMS: '1'

    I've exhausted Google, so I'm hoping someone more familiar with Ruby can offer ideas on how to resolve the error: Could not find rake-10.1.0 in any of the sources


  • Form Validation Options

    The steps involved in transmitting form data from the client to the web server:

    1. User loads the web form.
    2. User enters data into the web form fields.
    3. User clicks submit.
    4. On submit, the page validates the fields using JavaScript.
    5. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    6. If the form passes the data validation process, the browser URL-encodes the value of every field and posts it to the server.
    7. The server reads the posted data from the query string and then validates the data again, to ensure data consistency and to prevent non-validated data (for example, because JavaScript was turned off in the client's browser) from being inserted into a database or passed on to other processes.
    8. If the data passes the second validation check, the server-side code continues with the requested processes.

    In my opinion, it is mandatory to validate data using both client-side and server-side validation, as a fail-over process. Client-side validation allows users to correct any errors before they are sent to the web server for processing, and this allows for an immediate response back to the user regarding data that is not correct or not in the desired format. In addition, this prevents unnecessary interaction between the user and the web server and will free up the server over time compared to doing only server-side validation. Server validation is the last line of defense when it comes to validation, because you can check to ensure the user's data is correct before it is used in a business process or stored to a database. Honestly, I cannot foresee a scenario where I would want to use only one form of validation over another, especially with the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.


  • I cannot install Flash Player, I am getting a 1603 exit code

    - by Naz
    I am trying to install Flash Player silently, using a PowerShell script. I do not think it is being installed: I looked under Control Panel > Uninstall Programs, and I don't see Flash Player listed there. Also, I am printing the exit code for the process, and it prints exit code 1603, which is "fatal error during installation". As an experiment, I double-clicked on the Flash Player .msi file, and it gave me error 1722: "Error 1722. There is a problem with this Windows Installer package. A program run as part of the setup did not finish as expected. Contact your support personnel or package vendor."


  • Problem upgrading kernel on Debian 3.1

    - by exhuma
    Hi, I have quite an old box in a remote server farm, so I have no direct access, only remote SSH (and, via SSH, a serial console). I haven't updated this box in ages. Now, whenever I want to install a new package, a dependency on glibc appears. Unfortunately, the install of glibc depends on a 2.6 kernel, and I am running a venerable 2.4 kernel (one more reason to upgrade). The problem is that the install of a new kernel has an indirect dependency (via locales) on glibc. So, to install glibc I need a new kernel, and for a new kernel I need to upgrade glibc. Essentially I am blocked. What's the best way to proceed, considering I have no "hardware" access? Here's a quick transcript of the upgrade process:

        [green:~]% sudo aptitude install linux-image-686
        Reading Package Lists... Done
        Building Dependency Tree
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        The following packages are unused and will be REMOVED:
          gcc-4.3-base
        The following NEW packages will be automatically installed:
          dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686
          linux-image-2.6.18-6-686 module-init-tools yaird
        The following packages have been kept back:
          adduser apache2 apache2-mpm-prefork apache2-utils apache2.2-common apt
          apt-utils aptitude autoconf autotools-dev awstats base-files base-passwd
          [...snip...]
          util-linux vacation vim vim-common wamerican wbritish wget whiptail whois
          wwwconfig-common zlib1g
        The following NEW packages will be installed:
          dash libc6-i686 libparse-recdescent-perl linux-image-2.6-686
          linux-image-2.6.18-6-686 linux-image-686 module-init-tools yaird
        The following packages will be upgraded:
          hotplug libc6
        2 packages upgraded, 8 newly installed, 1 to remove and 277 not upgraded.
        Need to get 0B/22.7MB of archives. After unpacking 52.1MB will be used.
        Do you want to continue? [Y/n/?]
        Writing extended state information... Done
        Preconfiguring packages ...
        (Reading database ... 34065 files and directories currently installed.)
        Preparing to replace libc6 2.3.6.ds1-13 (using .../libc6_2.7-18lenny2_i386.deb) ...
        Checking for services that may need to be restarted...
        Checking init scripts...
        WARNING: init script for postgresql not found.
        [ --- libc6 config screen appears here --- ]
        WARNING: POSIX threads library NPTL requires kernel version 2.6.8 or later.
        If you use a kernel 2.4, please upgrade it before installing glibc.
        The installation of a 2.6 kernel _could_ ask you to install a new libc first,
        this is NOT a bug, and should *NOT* be reported. In that case, please add
        etch sources to your /etc/apt/sources.list and run:
          apt-get install -t etch linux-image-2.6
        Then reboot into this new kernel, and proceed with your upgrade
        dpkg: error processing /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb (--unpack):
          subprocess pre-installation script returned error exit status 1
        Errors were encountered while processing:
          /var/cache/apt/archives/libc6_2.7-18lenny2_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
        Ack! Something bad happened while installing packages. Trying to recover:
        dpkg: dependency problems prevent configuration of locales:
          locales depends on glibc-2.7-1; however:
            Package glibc-2.7-1 is not installed.
        dpkg: error processing locales (--configure):
          dependency problems - leaving unconfigured
        Errors were encountered while processing:
          locales

    Now, if I follow the instructions as prompted, I get the following. Note that I am using aptitude instead of apt-get, to benefit from the better dependency tracking; I did try with apt-get first, but that led me to the same problem.

        [green:~]% sudo aptitude install -t etch linux-image-2.6.26-2-686
        Reading Package Lists... Done
        Building Dependency Tree
        Reading extended state information
        Initializing package states... Done
        Reading task descriptions... Done
        E: Unable to correct problems, you have held broken packages.
        E: Unable to correct dependencies, some packages cannot be installed
        E: Unable to resolve some dependencies!
        Some packages had unmet dependencies. This may mean that you have requested
        an impossible situation, or, if you are using the unstable distribution,
        that some required packages have not yet been created or been moved out of
        Incoming. The following packages have unmet dependencies:
          linux-image-2.6.26-2-686: Depends: initramfs-tools (>= 0.55) but it is not installable
                                    or yaird (>= 0.0.13) but it is not installable
                                    or linux-initramfs-tool, which is a virtual package.

    Any ideas?
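    Worth noting: 2.6.26-2 is the lenny kernel, while etch shipped 2.6.18, which would explain why its dependencies are not installable from etch sources. Following the hint in the libc6 preinst message itself, a sketch of the documented escape from the deadlock (the mirror URL is an example):

        # Add etch sources alongside the existing ones.
        echo 'deb http://archive.debian.org/debian etch main' >> /etc/apt/sources.list
        apt-get update

        # Install only the 2.6 kernel from etch, exactly as the preinst suggests.
        apt-get install -t etch linux-image-2.6-686

        # Reboot into the new kernel, then continue the glibc/dist upgrade.
        shutdown -r now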


  • PowerShell variables to string

    - by Mike Koerner
    I'm new to PowerShell. I'm trying to write an error handler to wrap around my script. Part of the error handler is dumping out some variable settings. I spent a while trying to do this and couldn't google a complete solution, so I thought I'd post something. I want to display the $myInvocation variable. In PowerShell you can do this:

        PS C:\> $myInvocation

    For my purpose I want to create a StringBuilder object and append the $myInvocation info. I tried this:

        $sbOut = new-object System.Text.StringBuilder
        $sbOut.appendLine($myInvocation)
        $sbOut.ToString()

    This produces:

        Capacity   MaxCapacity   Length
        --------   -----------   ------
              86    2147483647       45
        System.Management.Automation.InvocationInfo

    This is not what I wanted, so I tried:

        $sbOut.appendLine(($myInvocation | format-list *))

    This produced:

        Capacity   MaxCapacity   Length
        --------   -----------   ------
             606    2147483647      305
        Microsoft.PowerShell.Commands.Internal.Format.FormatStartData
        Microsoft.PowerShell.Commands.Internal.Format.GroupStartData
        Microsoft.PowerShell.Commands.Internal.Format.FormatEntryData
        Microsoft.PowerShell.Commands.Internal.Format.GroupEndData
        Microsoft.PowerShell.Commands.Internal.Format.FormatEndData

    Finally I figured out how to produce what I wanted:

        $sbOut = new-object System.Text.StringBuilder
        [void]$sbOut.appendLine(($myInvocation | out-string))
        $sbOut.ToString()

        MyCommand        : $sbOut = new-object System.Text.StringBuilder
                           [void]$sbOut.appendLine(($myInvocation | out-string))
                           $sbOut.ToString()
        BoundParameters  : {}
        UnboundArguments : {}
        ScriptLineNumber : 0
        OffsetInLine     : 0
        HistoryId        : 13
        ScriptName       :
        Line             :
        PositionMessage  :
        InvocationName   :
        PipelineLength   : 2
        PipelinePosition : 1
        ExpectingInput   : False
        CommandOrigin    : Runspace

    Note that the [void] in front of the StringBuilder call suppresses the Capacity/MaxCapacity output of the StringBuilder object, and the pipe to out-string turns the output into a string. It's not pretty, but it works.


  • svn diff including annotate/blame-like information of when changes were made by whom

    - by Wouter Coekaerts
    Can you add annotate/blame-like information to svn diff, so that for every changed line it includes which user and revision changed that line? For example, an annotated diff comparing revisions 8-10 could output something like:

        9  user1  - some line that user1 deleted in revision 9
        10 user2  + some line that user2 added in revision 10

    The context (the unchanged lines around it) may be included or not; it doesn't matter. It's not just a matter of "quickly" writing a shell script combining the output of svn diff and svn annotate: annotate, for example, will never show you who removed a line. It's also not a matter of doing annotate on a revision in the past: we're not interested in who originally added the line that got removed (that's not the one who "caused" the diff), we want to know who removed it. I suspect the only way to implement something like this is to inspect each and every commit between the two revisions being compared (and somehow map all the changes in the separate diffs to lines in the total diff)... Does there exist a tool that does something like that?
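    No ready-made tool comes to mind, but the per-commit inspection described above can be scripted directly: walk each revision in the range, ask svn log who made it, and prefix the revision and author onto each changed line of that revision's diff. A rough sketch; it labels every change with the commit that made it, but it does not fold the per-revision hunks into one combined diff, which is the genuinely hard part:

        #!/bin/sh
        # annotate-diff.sh <from-rev> <to-rev> <url-or-path>
        FROM=$1; TO=$2; TARGET=$3

        r=$((FROM + 1))
        while [ "$r" -le "$TO" ]; do
            # The second line of quiet log output is "rN | author | date".
            author=$(svn log -r "$r" --quiet "$TARGET" |
                     awk -F'|' 'NR==2 { gsub(/ /, "", $2); print $2 }')
            # Keep only added/removed lines, dropping the +++/--- file headers.
            svn diff -c "$r" "$TARGET" | grep -E '^[+-]' | grep -Ev '^(\+\+\+|---)' |
                while IFS= read -r line; do
                    printf '%s %s %s\n' "$r" "$author" "$line"
                done
            r=$((r + 1))
        done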


  • Importing email from other accounts in Gmail

    - by reyjavikvi
    I have a Gmail account that imports (via POP) messages from an old Hotmail account that people still send mail to. I normally read my mail with Thunderbird, not the web interface. The thing is that Gmail imports the mail roughly once per hour, and sometimes I want to know whether there are messages in my old account right now. I could just go and hit the "import now" button, but still: is there someplace in the settings where I can change it so the importing happens once every, say, 15 minutes? Alternatively, is it possible to make a script that tells Gmail to import the messages at that moment, just as it would if I went to Settings and clicked "Check if I have mail now"? I've heard about libgmail for Python, but it doesn't seem like it can do that.


  • how to disable local logins to a Mac Snow Leopard machine?

    - by Wang
    I have a maintenance script that needs to run uninterrupted, so I'd like some way to disable local user logins. Right now, the solution is to send SIGSTOP to the loginwindow process, which is suboptimal for several reasons. The most important of them is that the observed behavior is a login prompt that appears to accept the user's credentials but then hangs on a blank desktop before the menu bar, Dock, or desktop icons appear. This has led to users "fixing" the problem by rebooting the machine. Is there a better way to disable local logins? We currently use iHook, so if there's any way to abort a login from within the login hook, that would integrate nicely with our current setup. Unfortunately, Apple doesn't seem to have documented exactly what would cause Mac OS to abort the login.


  • Solaris cc segfaults when compiling rsync on intel platform

    - by PP
    I am trying to compile rsync 3.0.7 on Solaris 5.10 on an Intel chipset. When running ./configure I see the following, obviously erroneous, lines:

        checking size of int... 0
        checking size of long... 0
        checking size of long long... 0
        checking size of short... 0
        checking size of int16_t... 0
        checking size of uint16_t... 0

    In config.log I see the following:

        configure.sh:5448: /tool/sunstudio12.1/bin/cc -xc99=all -o conftest -g -DHAVE_CONFIG_H conftest.c >&5
        "conftest.c", line 123: warning: statement not reached
        cc: Fatal error in cc : Segmentation Fault
        configure.sh:5448: $? = 1
        configure.sh: program exited with status 1

    Segmentation fault? What could be causing the compiler to segfault while compiling such a simple test program?


  • Are Plesk server backups useful?

    - by Michael T. Smith
    I'm working for a startup now, and I'm the programmer. Because of our small team size, I'm also handling the server management for now (until we get a dedicated server administrator). I've never used Plesk before, and the server we're using (a Media Temple Dedicated Virtual server) had it installed when I got here. One of my first jobs was to set up backups: Plesk was already running its nightly server-wide backups. I created a small script to dump the web app, its DBs, and any assets, tar them, store them, and then copy them to another small server we have (to back up the backups). But we're constantly running into hard drive space issues because of the Plesk backups. And I'm wondering: are they useful? If I have the web app and all of its assets, I could easily enough get another server up and running. Do we need to keep running Plesk's backups? Thoughts?
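    For reference, the app-level backup described above only takes a few cron-driven lines, which is part of why the nightly server-wide dumps feel redundant; everything below (paths, database name, destination host) is illustrative:

        #!/bin/sh
        # Nightly app backup: dump the DB, tar the app tree, ship both offsite.
        STAMP=$(date +%Y%m%d)
        DEST=/var/backups/app
        APP=/var/www/vhosts/example.com/httpdocs   # hypothetical docroot

        mkdir -p "$DEST"
        mysqldump -u backup -p'secret' appdb | gzip > "$DEST/appdb-$STAMP.sql.gz"
        tar czf "$DEST/app-$STAMP.tar.gz" "$APP"

        # Copy to the secondary box and prune local copies older than 14 days.
        scp "$DEST"/*-"$STAMP".* backup@backuphost:/srv/backups/
        find "$DEST" -type f -mtime +14 -delete

    What an app-level dump like this does not capture is the Plesk configuration itself (mailboxes, DNS, vhost settings), which is the main thing the server-wide Plesk backups still buy you.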


  • Simple web-frontend for remote svn administration?

    - by Stefan Lasiewski
    We run an SVN repository. Some of our more advanced users need to be able to perform some SVN administration without relying on the system administrator: they need to be able to do things like create SVN repositories, delete SVN repositories, and perform commands like 'svnadmin dump' and 'svnadmin load'. We'd like to avoid SSH access on these FreeBSD machines, and would rather provide a service interface through a web UI. I'm looking for a simple script (or a small number of scripts) written in Perl or PHP. I found svnadmin or svnadmin.pl, but was hoping to find something with a larger user community or which has been recommended by others. It looks like Trac allows SVN administration, but it comes with many more features than we need.


  • RTL8168B/8111B LAN card is not detected in Red Hat. Error: make: *** /lib/modules/2.6.18-53.el5/build: No such file or directory

    - by Deepak Narwal
    Hello friends. In my computer the LAN card is a Realtek RTL8168B/8111B PCI-E Gigabit Ethernet NIC (NDIS 6.20). My system dual-boots Windows 7 and Red Hat 5.1, and Red Hat is not picking up this model of LAN card automatically. I downloaded the driver for this particular model from the Realtek site and found some .tar packages for my kernel, and when I tried to install them (check old drivers & unload, build the module and install), I got:

        make: *** /lib/modules/2.6.18-53.el5/build: No such file or directory.  Stop.
        make[1]: *** [modules] Error 2
        make: *** [modules] Error 2

    I unpacked the tar files according to the instructions and tried to run the autorun.sh script as mentioned in the readme file, but it shows the above error. Now what do I do? I am not getting it.
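    That make error means the kernel build tree is missing: the Realtek driver compiles against headers under /lib/modules/$(uname -r)/build, which a stock install doesn't always include. Installing the matching kernel-devel package, from the installation media since the NIC is down, usually resolves it; the media path below is an assumption:

        # Check what the running kernel expects to find:
        uname -r                               # e.g. 2.6.18-53.el5
        ls -l /lib/modules/$(uname -r)/build   # dangling symlink => headers missing

        # Install the headers matching the running kernel from the RHEL media:
        rpm -ivh /media/cdrom/Server/kernel-devel-$(uname -r).$(uname -m).rpm

        # Then re-run the Realtek autorun.sh, which should now be able to build.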


  • How to find the computer name a user logged on to

    - by V. Romanov
    Hi guys, is there a tool or script or some other way of knowing what computer name a specific user is currently logged on to? Or even was logged on to? Say the user "HRDrone" is working on his machine, whose hostname is "HRStation01". I, sitting at my sysadmin desk, only know that the username is "HRDrone". Is there any way I can find out that he is logged on to "HRStation01" without asking the user? AD? Event Viewer? Anything? Thanks!

