Search Results

Search found 33445 results on 1338 pages for 'single instance storage'.

Page 784/1338

  • Improve efficiency when using parallel to read from compressed stream

    - by Yoga
    This is another question extending my previous one [1]. I have a compressed file and stream it into a Python program, e.g. bzcat data.bz2 | parallel --no-notice -j16 --pipe python parse.py > result.txt. parse.py can read from stdin continuously and print to stdout. My EC2 instance has 16 cores, but top shows a load average of only 3 to 4. From ps I am seeing a lot of processes like sh -c 'dd bs=1 count=1 of=/tmp/7D_YxccfY7.chr 2>/dev/null';. I know I can improve performance with -a in.txt, but in my case I am streaming from bz2 (I cannot extract it first since I don't have enough disk space). How can I improve efficiency in my case? [1] Gnu parallel not utilizing all the CPU
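
    One commonly suggested tweak, shown here only as a sketch: parallel's --pipe mode spends time splitting its input into chunks (the dd bs=1 processes above are part of that splitting), so handing each worker a larger chunk via --block reduces the splitting overhead. The 10M value is an assumption you would need to tune for your data.

      bzcat data.bz2 | parallel --no-notice -j16 --pipe --block 10M python parse.py > result.txt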

    Read the article

  • How do you create a SQL query in Excel 2007 with a dynamic date range?

    - by Jordan
    I am trying to create a reporting spreadsheet that can print reports for a given time period. The query below works, but when I try to use a "?" parameter in place of the date, I get an error after selecting a cell containing my date. If I use single quotes ('?'), I get a string-to-date/time conversion failure; if I don't (?), I get a syntax error near @p1. Eventually I will need either a start and end date, or a formula adding a month or a shift to the starting date/time, to filter the data down to the important information. The query was built in Microsoft Query. SELECT FloatTable.DateAndTime, TagTable.TagName FROM master.dbo.FloatTable FloatTable, master.dbo.TagTable TagTable WHERE FloatTable.TagIndex = TagTable.TagIndex AND ((FloatTable.DateAndTime={ts '2012-06-01 00:00:00'})) Any assistance would be much appreciated. Thanks in advance.
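
    A sketch of the parameterised date range the question is heading towards, assuming Microsoft Query will accept bare ? placeholders once they are outside the {ts ...} escape; the table and column names are taken from the query above.

      SELECT FloatTable.DateAndTime, TagTable.TagName
      FROM master.dbo.FloatTable FloatTable, master.dbo.TagTable TagTable
      WHERE FloatTable.TagIndex = TagTable.TagIndex
        AND FloatTable.DateAndTime >= ?
        AND FloatTable.DateAndTime < ?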

    Read the article

  • Is Apache ReverseProxy to Passenger Standalone an acceptable production deployment?

    - by davetron5000
    I have the need to deploy Rails 3 apps, using RVM and gemsets, and am expecting "public" traffic (i.e. this is not an internal-only app). I also must use Apache as the public interface to my app. I understand that Passenger Standalone can help accomplish the Rails/RVM end, and I have successfully set it up in my development environment. My question is how viable this setup is for a production deployment. Is deploying via Apache configured to reverse proxy to my Passenger-powered Rails app going to create problems? Since I'm designing the production deployment now, I want to understand whether I should spend the additional time setting up Passenger connected to Apache and have that Passenger communicate with the Passenger Standalone instance running my Rails app. So I'm looking for, I guess, one of three answers: (1) Apache reverse proxy to Passenger Standalone will be generally fine; (2) you should not use the Apache/Passenger Standalone configuration, but set up Passenger on the Apache side as well; (3) your entire setup is just wrong, please RTFM (and include a link to the "FM").
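
    For reference, a minimal sketch of the reverse-proxy side, assuming Passenger Standalone is listening on 127.0.0.1:3000 and that mod_proxy/mod_proxy_http are enabled; the server name and port are placeholders, not from the question.

      <VirtualHost *:80>
          ServerName app.example.com
          ProxyPreserveHost On
          ProxyPass / http://127.0.0.1:3000/
          ProxyPassReverse / http://127.0.0.1:3000/
      </VirtualHost>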

    Read the article

  • Why does my Visual Studio 2010 default to a horizontal windows split if I quit then reopen it?

    - by Martin Doms
    I use Visual Studio 2010 Professional at work and until a couple of weeks ago I had no problems. But now whenever I open an instance of VS 2010 it defaults to a horizontal split. I never split my windows horizontally, so this is very annoying. It happens consistently, every time on every project: the editor looks normal before I close the window, but when I close it and reopen that project the windows come back split horizontally (screenshots omitted). The only plugin I use is ReSharper, in case it's relevant.

    Read the article

  • How to correctly write an installation or setup document

    - by UmNyobe
    I just joined a small start-up as a software engineer after graduation. The start-up is 4 years old, and I am working with the CEO and the COO, even though there are some people abroad. Basically they both used to do almost everything. I am currently in some kind of training phase. I have at my disposal the internal architecture, setup and installation documentation. The architecture documentation is like a bible and should contain complete information. The rest is used to give directions in different processes. The issue is that these documents are more or less dated, as they just didn't have the time to update them. I will be in charge of training the next hires, and updating these documents is part of my training. In some there is a lot of hard-coded information like: Install this_module_which_still_exists cd this_dir_name_changed cp this_file_name_changed other_dir_name_changed ./config_script.sh ./execute_script.sh The issues I have faced: either the module installation is completely different (for instance there is now an rpm, or a different OS), or names have changed and I need to replace old names with new ones; the description of the purpose of the current step is missing; information about a whole topic is missing. Fortunately these guys are around and I get all the information I want and all the explanations I need. I want to bring a design to the next documents so that in the future people don't feel like they are completely rewriting a document each time they update it. Do you have suggestions? If there is a lightweight design methodology available online that you can point me to, that's nice too. One thing I will do for sure is set up a versioning repository for the documents alone. There is already one for the source code, so I don't know why internal documents deserve a different treatment.

    Read the article

  • Windows user trying to install Git on Solaris

    - by nahab
    Is there a simple way to install Git on Solaris, as there is on Windows, without installing any side libraries and compiling source files? And if not, why? UPD. Yes, I'm looking for a single package that is easy to install. We have ~8 Solaris zones used for development, so we need a simple way to install Git on them quickly. Installation should be easy because each member of the team will possibly have to do it, and it should be fast because of the large number of zones.
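
    One possible route, sketched under the assumption that the zones can reach the OpenCSW catalogue (which carries a prebuilt git package): bootstrap pkgutil once, then install git from it.

      pkgadd -d http://get.opencsw.org/now   # bootstrap the OpenCSW pkgutil tool
      /opt/csw/bin/pkgutil -U                # refresh the package catalogue
      /opt/csw/bin/pkgutil -y -i git         # install git and its dependencies
      export PATH=/opt/csw/bin:$PATH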

    Read the article

  • Powershell enters foreach loop with null object

    - by SteB
    I'm listing all backups in a given directory: $backups = Get-ChildItem -Path $zipFilepath | Where-Object {($_.lastwritetime -lt (Get-Date).addDays(-7)) -and (-not $_.PSIsContainer) -and ($_.Name -like "backup*")} If I set it to deliberately return no files (.addDays(-600)), then the following prints "Empty" (as expected): if (!$backups) { "Empty" } If I try to list the names with an empty $backups variable: foreach ($file in $backups) { $file.FullName; } I get nothing (as expected). If I change this to: "test" + $file.FullName; then I get a single "test" under the "Empty". How is this possible if $backups is empty?
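
    A sketch of the usual explanation and guard (the variable names follow the question): in older versions of Windows PowerShell, foreach runs its body once when the collection is $null, which is why a lone "test" gets printed; testing the variable first, or forcing it into an array, avoids that.

      if ($backups) {
          foreach ($file in @($backups)) {
              "test" + $file.FullName
          }
      }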

    Read the article

  • Url-based web site publishing on Windows Server platform

    - by Maxim V. Pavlov
    I have a Windows 2008 Enterprise SP2 server in a datacenter. It is a 32-bit OS. I need to be able to do "smart" URL-based web site publishing, so that with a single external IP I can publish many sites on port 80, and some firewall logic resolves, based on the requested URL, which site in IIS gets the request. Forefront TMG 2010 has this feature, but it is not supported on 32-bit systems. Is there a software solution that can satisfy my need on the Windows 2K8 platform? Thank you. P.S. Perhaps there is a workaround or a tweak to do what I need in IIS?
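
    On the IIS tweak mentioned in the P.S.: host-header bindings let several sites share one IP address and port, with IIS choosing the site from the Host header of each request. A sketch using appcmd, where the site and host names are placeholders:

      appcmd set site /site.name:"Site1" /+bindings.[protocol='http',bindingInformation='*:80:site1.example.com']
      appcmd set site /site.name:"Site2" /+bindings.[protocol='http',bindingInformation='*:80:site2.example.com']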

    Read the article

  • How to make sure clients update their browser cache when my website is updated?

    - by user64204
    I am using the HTTP 1.1 Cache-Control header to implement client-side caching. Since I update my website only once a month I would like the CSS and JS files to be cached for 30 days with Cache-Control: max-age=2592000. The problem is that the 30-day period defined by Cache-Control doesn't coincide with the website update cycle, it starts from the moment the users visit the site and ends 30 days later, which means an update could occur in the meantime and users would be running with outdated content for a while, which could break the rendering of the website if for instance the HTML and CSS no longer match. How can I perform client-side caching of content for periods of several days but somehow get users to refresh their CSS/JS files after the website has been updated? One solution I could think of is that if website updates can be schedule, the max-age returned by the server could be decreased every day accordingly so that no matter when people visit the website, the end of caching period would coincide with the update of the website, but changing the server configuration every day goes against one of my sysadmin principles (once it's running, don't touch it).
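
    One pattern that fits this constraint, sketched here as a suggestion rather than something from the question: keep the long max-age but change the asset URL whenever the site is updated (for example by appending a version or deploy date), so browsers only fetch a new copy when the updated HTML references it.

      <link rel="stylesheet" href="/css/site.css?v=2012-06-01">
      <script src="/js/app.js?v=2012-06-01"></script>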

    Read the article

  • How could all of my hard drives fail at once?

    - by Taylor
    I have an Ubuntu 13.04 server. Today I found the box had crashed. I restarted it, and now every single hard drive's partition table is missing (1 SSD for /boot and /, and 3 2TB drives for RAID). I have the SSD connected to a laptop via a USB-SATA cable, and sure enough, the partition table is missing. This tells me that the motherboard / SATA controller / software actually broke the drives, not just that they can't be read correctly. Something similar happened to only the SSD a few months ago, and I was forced to just re-partition it. How the heck could this have happened? A bad motherboard or SATA controller?
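
    As a side note on recovery (an assumption about what might help, not part of the question): testdisk can often rebuild a lost partition table by scanning the disk for filesystem signatures, which may be worth trying before re-partitioning.

      apt-get install testdisk
      testdisk /dev/sda    # interactive: Analyse -> Quick Search -> Write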

    Read the article

  • Synchronisation software to find files that have moved paths within a folder?

    - by kpierce8
    Say I have a pictures folder which I reorganized on one computer. I'd like to use that directory as the base and compare it with another version on a backup drive. Will any synchronisation/compare program find that a file in one folder has moved location within the compared folder? For instance, say I reorganized my pictures from trips into folders by year, with the trip folders inside each year folder. If I use a regular compare utility, I wind up with two copies of everything that has moved, in different locations.

    Read the article

  • Amazon EC2 Socket connection not being accepted

    - by Joseph
    I am trying to run a Java application on my EC2 instance. The application accepts socket connections on port 54321. If I try to connect to it, it times out. My Security Group is set as:

    TCP Port (Service)  Source     Action
    21                  0.0.0.0/0  Delete
    22 (SSH)            0.0.0.0/0  Delete
    80 (HTTP)           0.0.0.0/0  Delete
    20393               0.0.0.0/0  Delete
    54321               0.0.0.0/0  Delete

    Is there anything else I need to do?

    # iptables -nvL
    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in out source destination
    Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in out source destination
    Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in out source destination
    # iptables -nvL -t nat
    Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in out source destination
    Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in out source destination
    Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in out source destination
    Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
     pkts bytes target prot opt in out source destination
    #
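
    One quick check worth sketching here (an assumption about a likely cause, not something from the question): since the security group and iptables output above look permissive, confirm that the Java application is actually listening on 0.0.0.0 rather than 127.0.0.1.

      # show listening TCP sockets and the owning process
      netstat -tlnp | grep 54321
      # or, on newer systems
      ss -tlnp | grep 54321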

    Read the article

  • Repeat the csv header twice without "Append" (PowerShell 1.0)

    - by Mark
    I have prepared a PowerShell script to export a list of system users in CSV format. The script can output the users list with Export-Csv with a single header row (the header row at the top). However, my requirement is to repeat the header row twice in the file. This is easy to achieve in PowerShell 3.0 with "Append" (e.g. $header | out-file $filepath -Append), but our server environment is running PowerShell 1.0, so I cannot do that. Is there any workaround? I cannot manually add it myself. Thank you.
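
    A rough workaround sketch, assuming $users holds the user objects and $filepath the output path, and written to avoid cmdlet parameters introduced after PowerShell 1.0; it also assumes there is at least one data row.

      $users | Export-Csv $filepath -NoTypeInformation
      $lines  = Get-Content $filepath
      $header = $lines[0]
      # write the header twice, then the remaining data rows
      @($header, $header) + $lines[1..($lines.Length - 1)] | Set-Content $filepath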

    Read the article

  • Less daunting front end for SQL Server

    - by Martin
    We currently have a few users who have been using Access very successfully to throw around large amounts of data. We've now got to the point where the data is just too large to be held in Access, as well as wanting to hold it in a single place where multiple users can access it. We have therefore moved the data over to SQL Server. I want to provide a general tool that they can use to view the data on the server and do some simple things like run queries and filters and export the data for offline manipulation. I don't want the support headaches that might come with rolling out SQL Management Studio, and neither do I want to have to create an Access database with links for each current database or for ones that are created in the future. Can anyone recommend a simple tool that will connect to a server, list all the databases and allow a user to drill into a table and look at the data? Many thanks.

    Read the article

  • iptables, allow access from certain MAC addresses

    - by user788171
    Presently I limit which clients can access my server by IP address via iptables: only approved IP addresses can connect. The problem with this is that if a client is on a laptop and goes to a different location, they can no longer connect because the IP has changed. For a variety of reasons, iptables authentication is the only option I have. Is there a way to restrict access by device instead of by IP address? For instance, only allow certain MAC addresses to connect to port 5000. Is it possible to do this via iptables? Note that the computers are not on the same network; they could be connecting from anywhere in the world.
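
    For completeness, a sketch of the iptables mac module with a placeholder address; note the caveat in the comment, which matters because the question says the clients are not on the local network.

      # only works when the client is on the same layer-2 segment as the server,
      # because source MAC addresses are not preserved across routers
      iptables -A INPUT -p tcp --dport 5000 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT
      iptables -A INPUT -p tcp --dport 5000 -j DROP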

    Read the article

  • How to stream multiple files on demand in VLC?

    - by romkyns
    Is there any way at all that I can set up VLC on a server PC so that I can access a list of all my videos from another PC and pick one to be streamed on demand? I've been pointed at this streaming guide (pdf), but it's pretty useless. For a start, most of the menus in those screenshots don't match the current version of VLC, and then it sort of assumes you already know what you're doing. So far I have managed to figure out how to stream a single file, which I must choose on the server PC before watching - pretty useless if you ask me! The impenetrable "UI" doesn't help either... (P.S. The reason I'm going for streaming rather than the much simpler network-drive setup is described in this question)

    Read the article

  • QR Codes and Short Links - Please Take A Look [closed]

    - by Joe Turner
    I'm looking for a way to create a QR code and a shortened link when a form is submitted. I have the QR code bit, but the link is too long for me, and the QR code looks scary and complicated. The way it works is: the user types in (in this instance) a contract number. Then a folder named after that contract number is created on the server (www.mysite.com/QR/$contractnumber). Then, using PHP again, I create a QR code through Google, because I know that every QR code will be linking to the same place, just with a different ending of the link. The only bit that changes is the $POST... I was wondering if there was a way to shorten the link before it goes to Google? It would have to be through PHP. The user enters the contract number in the form, then that number (usually around 5/6 digits) will be entered into an already existing command? I'm not an expert in anything, I just know some really random snippets of code... and HTML and CSS, of course. Any help would be appreciated, and judging by the few days I have been searching this, I think it might help a few people in the future. I would also like to confirm that the solution can't be one of those visual URL shorteners. If it is, it just needs to be the back-end of it, built into an existing form and QR generator. Simple?
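
    A rough PHP sketch of the idea, with heavy assumptions: the /q/ redirect path is invented for illustration (it would need a script or rewrite rule that forwards to /QR/$contractnumber), and the Google Chart API endpoint is the one commonly used for QR generation, not something from the question.

      <?php
      // shorten the link locally before handing it to the QR generator
      $contract = $_POST['contract'];                       // e.g. "123456"
      $short    = 'http://www.mysite.com/q/' . $contract;   // short URL you control
      $qr = 'https://chart.googleapis.com/chart?cht=qr&chs=300x300&chl=' . urlencode($short);
      echo '<img src="' . htmlspecialchars($qr) . '" alt="QR code">';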

    Read the article

  • How can I launch a GUI session on a remote Ubuntu Desktop via SSH from a non-GUI Linux shell?

    - by Vihung
    I am setting up a test environment, made up of various Linux boxes, and I have the need to launch an instance of Firefox on a remote machine via ssh. The remote machine has Ubuntu Desktop (11) and Firefox installed. The source machine is a Continuous Integration server and it creates an ssh session to the remote machine from a non-GUI environment. It then runs a script, which tries to launch Firefox on the remote machine. However, since the ssh session is a from a non-GUI environment, there is no display. Is it possible to have a headless X-windows display? i.e. a virtual display in the remote environment for Firefox to run in? What options do I have?
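
    One way this is commonly handled, sketched under the assumption that the xvfb package is installed on the remote Ubuntu machine: run Firefox against a virtual framebuffer X server so no physical display is needed. The host and URL are placeholders.

      ssh user@remote 'xvfb-run -a firefox http://example.com/test-page'
      # or manage the virtual display explicitly:
      Xvfb :99 -screen 0 1280x1024x24 &
      DISPLAY=:99 firefox http://example.com/test-page &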

    Read the article

  • Windows 8 Modern UI searching in files doesn't work

    - by Peter Jansen
    I have a problem with search in Windows 8. When I search for files through the Modern UI (Win+F), it won't return a single result from any of my drives. Searching via Windows Explorer works fine. I had the same problem in the Windows 8 Consumer Preview, but it worked in the Developer Preview. I have looked on the net for other users with similar problems, but I haven't found anything. Does anyone know what the problem might be?

    Read the article

  • Building intranet search

    - by gmkv
    At work, we have lots of information squirreled away in many different sites -- wikis, product docs, ticketing system, etc -- many of which require authentication. I'm very interested in having a single way to search all our various silos, and in my spare time have looked at Nutch, Grub, Django + Haystack, etc. None of these is a complete solution a la Google Mini or Google Search Appliance. Has anybody built a basic intranet search engine out of a mixture of these tools? Would you have recommendations about how to go about it? I like Django, and Haystack seems to be a mildly popular search solution for it, but I'd need to wire up a crawler that can support crawling authenticated sites to it.

    Read the article

  • Linux - Network Sharing a local NTFS usb drive

    - by Jonathan Rioux
    I have an external hard drive formatted as NTFS which I would like to be able to access over the network; I want to make a network share out of it. I also have a Debian machine running in my house, which gave me an idea: plug the external hard drive (USB) into the Debian machine and make a Windows share out of it, maybe with Samba, so I will be able to access it from my Windows 7 laptop and see it as a network share. Additionally, how can I restrict specific folders of that network share and allow only specific folders to specific users? For instance, I would like to give my girlfriend access to a folder with her name so she can put her files there, and so she won't be able to see the stuff in my folder...
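
    A minimal sketch of the Samba side, assuming the drive is mounted at /mnt/usb via ntfs-3g and that Unix accounts exist for both users; the share names, paths and user names are placeholders.

      # mount the NTFS drive (ntfs-3g package)
      mount -t ntfs-3g /dev/sdb1 /mnt/usb

      # /etc/samba/smb.conf - one share per person, restricted by valid users
      [mystuff]
         path = /mnt/usb/mystuff
         valid users = me
         read only = no

      [herstuff]
         path = /mnt/usb/herstuff
         valid users = her
         read only = no

      # give each account a Samba password: smbpasswd -a me ; smbpasswd -a her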

    Read the article

  • If I change the CPU, must I reinstall the OS?

    - by dag729
    Hi, as suggested by the title, I want to change CPU: actually I have two computers, one with Ubuntu running on an AMD Athlon 64 dual core 5200+ and the other with FreeBSD running on an AMD Sempron single core LE-1250. I would like to swap (I am not sure that this is the correct term...) the CPUs from one computer to the other one, that is take the dual core from the ubuntu pc and put it inside the freebsd pc and viceversa. The mobo is the same. Do you think I will encounter problems?

    Read the article

  • When using MVVM, should you create new viewmodels, or swap out the models?

    - by ConditionRacer
    Say I have a viewmodel like this:

      public class EmployeeViewModel
      {
          private EmployeeModel _model;

          public Color BackgroundColor { get; set; }

          public string Name
          {
              get { return _model.Name; }
              set
              {
                  _model.Name = value;
                  NotifyPropertyChanged("Name");
              }
          }
      }

    This viewmodel binds to a view that displays an employee. The thing to think about is whether this viewmodel represents an employee or a "displayable" employee. The viewmodel contains some things that are view specific, for instance the background color. There can be many employees, but only one employee view. With this in mind, when changing the displayed employee, does it make sense to create a new EmployeeViewModel and rebind it to the view, or to simply swap out the EmployeeModel? Is the distinction even important, or is it a matter of style? I've always leaned toward creating new viewmodels, but I am working on a project where the viewmodels are created once and the models are swapped out. I'm not sure how I feel about this, though it seems to work fine.
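
    For comparison, a sketch (not from the question) of what the model-swapping approach adds to the EmployeeViewModel above: a method that replaces the wrapped model and re-raises change notifications so bindings refresh for the new employee.

      public void SetModel(EmployeeModel model)
      {
          _model = model;
          // re-notify every wrapped property so the view picks up the new employee's values
          NotifyPropertyChanged("Name");
      }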

    Read the article

  • Config Time Service on Server 2008 DC using Group Policy Only

    - by Ed Fries
    I want to configure the Time Service using only GP in a Server 2008 R2 domain. I have created a GP as follows:

    Computer Config, Policies, Administrative Templates, System, Windows Time Policy:
    = Global Configuration Settings - Enabled w/ default settings.

    Computer Config, Policies, Administrative Templates, System, Windows Time Policy, Time Providers:
    = Configure Windows NTP Client - Enabled w/ default settings.
    = Enable Windows NTP Client - Enabled w/ default settings.
    = Enable Windows NTP Server - Enabled w/ default settings.

    The policy is linked, enforced and applied to the Domain Controllers OU. The GP modeling results show the policy is in effect on the DC (single-DC domain) and the DC is recognized as the PDC emulator. I have run gpupdate /force and logged off/on. The issue is that the DC shows the time source as internal. I understand I can force this at the command line using w32tm to set the peer, but I would like to understand what is missing in the GP. The default NTP Client GP setting includes time.windows.com,0x9 as the source, but it does not appear to be taking effect.
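
    For verifying what actually took effect, a short sketch of w32tm checks run on the DC (time.windows.com,0x9 is the policy default mentioned above):

      w32tm /query /source          # shows the peer currently in use
      w32tm /query /configuration   # shows whether each setting comes from the local registry or policy
      w32tm /resync /rediscover     # forces the service to re-select its time source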

    Read the article

  • How to set up remote access to a computer behind 2 routers?

    - by Steve Wasiura
    I can set up remote access to a PC behind a single router/firewall by using NAT and port forwarding, simples! But there is a customer that shares an internet connection with another office, and they are behind a second router/firewall. I drew a picture, but I can't attach it because I'm a new account on SF; see it here: http://i.imgur.com/b3FDx.png So how would I set up remote access to the PC that is behind the second firewall? It must be something to do with static routes, i.e. if I hit the WAN IP on port 4905 I want it forwarded to 10.0.0.30 by going through 192.168.1.10 - so a route statement like "for all requests to 10.0.0.30, use 192.168.1.10"? And assume router 2 has a static gateway IP of 192.168.1.10 and needs a standard NAT rule on router 2 to point port 4905 to 10.0.0.30. Is this the right way? Any tips? Both routers are Netgear consumer equipment. Thanks.
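
    The usual shape of the answer is chained port forwarding rather than a static route; sketched here with the addresses from the question (192.168.1.10 as router 2's WAN-side address is the question's own assumption):

      Router 1 (public WAN side)         : forward TCP 4905 -> 192.168.1.10, port 4905
      Router 2 (WAN side = 192.168.1.10) : forward TCP 4905 -> 10.0.0.30, port 4905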

    Read the article
