Search Results

Search found 47712 results on 1909 pages for 'looking for a script'.


  • Why does the Git community seem to ignore side-by-side diffs?

    - by Kyle Heironimus
    I used to use Windows, SVN, Tortoise SVN, and Beyond Compare. It was a great combination for doing code reviews. Now I use OSX and Git. I've managed to kludge together a bash script along with Gitx and DiffMerge to come up with a barely acceptable solution. I've muddled along with this setup, and similar ones, for over a year. I've also tried the Github diff viewer and the Gitx diff viewer, so it's not as if I haven't given them a chance. There are so many smart people doing great stuff with Git. Why not a side-by-side diff with the option of seeing the entire file? Of the people who have used both, I've never heard of anyone who prefers the single +/- view, at least for more than a quick check.
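    A hedged sketch of one workaround: Git can hand diffs to any external side-by-side viewer via difftool. The tool name and command below assume the DiffMerge command-line launcher is on your PATH; substitute your own viewer.

      git config --global diff.tool diffmerge
      git config --global difftool.diffmerge.cmd 'diffmerge "$LOCAL" "$REMOTE"'
      # Review any change side by side, whole files included:
      git difftool HEAD~1 -- path/to/file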

    Read the article

  • ubuntu preseed installation keeps missing mirror files

    - by JackWu
    I'm installing Ubuntu 12.04.2 with a preseed file, and there is one buggy problem with the preseed mirror settings. The symptom is that the installation process gets stuck. Tracking down the log file reveals the real problem: the installation is looking for a file that isn't there. This is just one of them; another pops up if I fake that file. This all happens during preseed, so I believe preseed has something to do with it. Googling "ubuntu preseed mirror" turned up a post with this example:

      # If you select ftp, the mirror/country string does not need to be set.
      #d-i mirror/protocol string ftp
      d-i mirror/country string manual
      d-i mirror/http/hostname string archive.ubuntu.com
      d-i mirror/http/directory string /ubuntu
      d-i mirror/http/proxy string
      # Alternatively: by default, the installer uses CC.archive.ubuntu.com where
      # CC is the ISO-3166-2 code for the selected country. You can preseed this
      # so that it does so without asking.
      #d-i mirror/http/mirror select CC.archive.ubuntu.com
      # Suite to install.
      #d-i mirror/suite string lucid
      # Suite to use for loading installer components (optional).
      #d-i mirror/udeb/suite string lucid
      # Components to use for loading installer components (optional).
      #d-i mirror/udeb/components multiselect main, restricted

    I wonder what the difference is between d-i mirror/http/hostname and d-i mirror/http/mirror; they both specify a mirror, right? In my preseed file there is no d-i mirror/http/mirror, and d-i mirror/http/hostname points to my own repo, as you might notice in the previous image. Here are my questions: does preseed fetch files/resources from the Internet even if I use a local repo? Why is it looking for a file that isn't even there? This has bothered me for quite some time; many thanks in advance to anyone who can help.
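    One quick way to narrow this down (a sketch; the local hostname is a placeholder for your own repo): request the release metadata the installer fetches directly and see which URLs 404, comparing against the official archive layout.

      # Does the local repo serve the files the installer asks for?
      wget -S http://my-local-repo.example/ubuntu/dists/precise/Release
      # Same path on the official archive, for comparison:
      wget -S http://archive.ubuntu.com/ubuntu/dists/precise/Release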

    Read the article

  • How to make audio and video streaming servers work?

    - by Santosh Linkha
    I am a PHP/MySQL developer and I am interested in how television and radio are broadcast live over the Internet. I want to know how it works and what it requires (which programming language and packages are best suited to it). And please clarify this for me: websites are stored on servers. From my desktop, if I want to broadcast some video, I need to connect to the web server to upstream the video. Is there an application to do that, or do I have to code it or embed it in my web application? Which programming language would be suitable (does Python support this)? I also need a script to handle the upstreamed video or audio (can I do that with PHP)?
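    As a rough illustration of the upstream leg (a sketch, not the only architecture): a desktop client typically pushes a live stream to a media server such as nginx-rtmp, Red5, or Wowza, which then fans it out to viewers. The server name and stream key below are placeholders.

      # Capture a local webcam (Linux v4l2 here) and push it to an RTMP ingest point:
      ffmpeg -f v4l2 -i /dev/video0 \
             -c:v libx264 -preset veryfast \
             -f flv rtmp://media.example.com/live/mystream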

    Read the article

  • Most cost-efficient way to back up Subversion data to S3?

    - by sludge
    I'm looking at using S3 as an offsite backup repo for my Subversion database. When I dump my SVN database, it's about 10 gigabytes, and I would like to avoid the charge of uploading that data repeatedly. The anatomy of this large file is such that new commits to Subversion modify the tail of the file, with everything else staying the same. Because Amazon S3 does not allow you to "patch" files with changes, I would have to upload ten gigs every time I take a backup, even after a single small commit. Here are the options as I see them:

    Option 1: I am looking at duplicity, which has a --volsize option that splits data into volumes of a given number of megabytes. Is it possible to split the Subversion dumps using this, so that further incremental backups are measured in megabytes?

    Option 2: Can I just back up the hot Subversion repository? This seems like a bad idea if it is in the middle of writing a commit. However, I have the option of taking the repo offline between midnight and 4am. Each revision in my Berkeley DB uses a file as its record.
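    A hedged sketch of Option 1 (bucket name and paths are placeholders): dump the repo locally and let duplicity ship it. duplicity computes rsync-style deltas between backups, so after the first full upload an incremental should carry only the changed tail of the dump; --volsize merely splits each archive into fixed-size chunks for upload.

      # Full backup the first time, incrementals thereafter (automatic):
      svnadmin dump /var/svn/myrepo > /backup/svn/myrepo.dump
      duplicity --volsize 100 /backup/svn s3://s3.amazonaws.com/my-svn-offsite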

    Read the article

  • Backup a Single Table in SQL Server using SSMS

    - by Greg Low
    Our buddy Buck Woody made an interesting post about a common question: "How do I back up a single table in SQL Server?" That got me thinking about what a backup of a table really is. BCP is often used to get the data, but you want the schema as well. For reasonable-sized tables, the easiest way to do this now is to create a script using SQL Server Management Studio. To do this, you:
    1. Right-click the database (note: not the table)
    2. Choose Tasks > Generate Scripts
    3. In the Choose Objects pane,...(read more)
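    For the data half of the job, a hedged bcp example (server, database, table, and file names are placeholders; -T uses Windows authentication, -n keeps native data types):

      bcp MyDatabase.dbo.MyTable out C:\backup\MyTable.dat -S MYSERVER -T -n
      rem Restore later with "in" and the same flags:
      bcp MyDatabase.dbo.MyTable in C:\backup\MyTable.dat -S MYSERVER -T -n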

    Read the article

  • Catching typos or other errors in web-based scripting languages

    - by foreyez
    Hi, my background is mainly strongly typed languages (Java, C++, C#). Having recently gotten back into a bit of JavaScript, I found it annoying that if I misspell something by accident (for example, typing 'myvar' instead of 'myVar'), my entire script breaks. Most of the time the browser doesn't even tell me there's an error; my page is just blank. Then I have to hunt through my code line by line to find the mistake, which is very time-consuming. In the languages I'm used to, the compiler lets me know when I've made a typo. My question: how do you overcome this issue in scripting (JavaScript)? Can you give me some tips? (This question is mainly aimed at people who have also come from a strongly typed language.) Note: I mainly use the terminal/Vim, mainly because I like the terminal, and I SSH a lot too.
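    Since the poster already lives in the terminal, one common answer (a sketch; jshint is just one of several linters) is to run a static checker over each file before loading it in the browser:

      npm install -g jshint
      echo '{ "undef": true }' > .jshintrc    # flag undeclared names like 'myvar'
      jshint myscript.js

    Adding "use strict"; at the top of a script also turns many silent typo-assignments into loud runtime errors.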

    Read the article

  • VBA Solution to VLOOKUP with Hyperlinks

    - by Emily2
    I am looking for some help with a VBA solution for preserving hyperlinks when using VLOOKUP in Excel (2010). I have a load of data on Sheet1 for internal use only, and a cut-down version of it on Sheet2. Instead of recreating Sheet2 every time, I want a working version that updates whenever Sheet1 is updated. Thus, I have used VLOOKUP on Sheet2 so that only the desired info is returned there. However, the problem was that many cells on Sheet1 contained hyperlinks to external websites, and these would not pull through to Sheet2 using VLOOKUP. With some help, however, the hyperlinks now pull through using the following VBA function:

      Function GetHyperLink(r As Range) As String
          If r.Hyperlinks.Count Then
              GetHyperLink = r.Hyperlinks(1).Address
          End If
      End Function

    And I am using the following formula in the relevant cell(s) on Sheet2:

      =HYPERLINK(GetHyperLink(INDEX('Sheet 1'!$B$1:$B$10001,MATCH(A4,'Sheet 1'!$A$1:$A$10001,0))),(VLOOKUP(A4,'Sheet 1'!$A$1:$B$10001,2,FALSE)))

    However, the problem is with formatting: every cell on Sheet2 is formatted blue and underlined, even though some of them do not contain a hyperlink! Can someone help with a VBA solution/formula to fix this last piece of the puzzle? Many thanks, in anticipation.

    Read the article

  • Blocking path scanning

    - by clinisbut
    I'm seeing a number of very suspicious requests in my access log:

      /i
      /im
      /imaa
      /imag
      /image
      /images
      /images/d
      /images/di
      /images/dis

    They build up toward a known resource (in the above example, /images/disrupt.jpg), all coming from the same IP. The request rate varies from 1/sec to 10/sec and seems somewhat random. Obviously someone is trying to find something, and it looks scripted. How do I block this kind of behaviour? I thought of blocking the requester's IP, at least for a given time, keeping in mind that: the request intervals seem legitimate (at least I think so), and I don't want to end up blocking a search-engine bot, which may hit 404 URLs too (that's a different problem, I know). Do they always use the same IP?
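    A hedged sketch of one approach: fail2ban watching the access log (filter name, paths, and thresholds are all placeholders to adapt). A short, temporary ban sidesteps the search-engine worry better than a permanent block.

      # /etc/fail2ban/filter.d/apache-scan.conf (name is illustrative):
      #   [Definition]
      #   failregex = ^<HOST> .* "(GET|HEAD) .*" 404
      #   ignoreregex =
      #
      # /etc/fail2ban/jail.local: ban for 10 minutes after 20 404s in 60s:
      #   [apache-scan]
      #   enabled  = true
      #   filter   = apache-scan
      #   logpath  = /var/log/apache2/access.log
      #   maxretry = 20
      #   findtime = 60
      #   bantime  = 600
      sudo service fail2ban restart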

    Read the article

  • Mac dev folder missing, SSH not working

    - by SamGoody
    A few days ago, SSH stopped working. When I try logging in I get the following message:

      PTY allocation request failed on channel 0
      stdin: is not a tty
      fatal: unrecognized command ''
      Connection to 74.52.61.194 closed.

    Web searches have shown me that there might be something wrong with /dev/std. But my computer lacks a /dev directory. There is an alias to /dev (hidden, but I've revealed hidden files to do this search), but when I try to open it I am told that it cannot find the folder it is aliasing. Now, many a web search tells me that without a /dev folder the computer doesn't work, but mine does seem to work, except for SSH. Also, are there any tools that can save my SSH preferences so that I don't have to type out the username@address, password, and path each time, all of which are long and complex? I'm not looking for a FileZilla-type client; there are many of those. I'm looking for something command-line like PuTTY, that lets me use bash on the remote machine. I'm on a MacBook Pro, latest version of Tiger.
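    For the second question, OpenSSH already reads per-host shortcuts from ~/.ssh/config (a sketch; the alias and username are placeholders):

      # ~/.ssh/config
      Host work
          HostName 74.52.61.194
          User myusername
      # then simply:
      ssh work

    Generating a key with ssh-keygen and appending the public key to the server's ~/.ssh/authorized_keys removes the password prompt as well.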

    Read the article

  • Are there other ways to do insert/update/delete on a remote Oracle database?

    - by gunbuster363
    I asked a question recently about the speed of insert/update/delete through the JDBC driver against a remote machine, but the problem cannot be solved easily. I would like to ask: is there any other way to execute inserts/updates/deletes against Oracle? The current situation is this: the DB is on a separate machine from the Java program that updates it. I looked around the Internet and found people suggesting pure SQL or PL/SQL to do the updates. Is that possible, and does the SQL or PL/SQL need to run on the local machine? I have no knowledge of PL/SQL, so I am not sure whether we can create some kind of script and call it on a remote machine. Say the situation is this: the input data is on machine A, and the original Java program is also on machine A, but Oracle is on machine B. Is there any approach other than JDBC?
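    One option (a sketch; the connect-string values are placeholders): SQL*Plus on machine A can execute plain SQL against the remote instance on machine B using an EZConnect string, no PL/SQL knowledge required:

      # update.sql holds ordinary SQL statements plus a COMMIT
      sqlplus myuser/mypassword@//machineB:1521/ORCL @update.sql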

    Read the article

  • Configuring SQL Server Express Edition for remote access

    - by rohancragg
    Originally posted on: http://geekswithblogs.net/rohancragg/archive/2013/07/24/configuring-sql-server-express-edition-for-remote-access.aspx
    I wanted to access SQL Express on my local machine from within a client Hyper-V virtual machine on the same domain. This article got me most of the way there: http://akawn.com/blog/2012/01/configuring-sql-server-2008-r2-express-edition-for-remote-access/ but it was a bit out of date. My steps were:
    1. Enable the TCP/IP protocol in SNAC
    2. Restart SQL Server
    3. Configure the (Windows 8) firewall to allow all inbound traffic for sqlservr.exe
    Footnote: I thought this might be relevant (it's nice to be able to script it): http://support.microsoft.com/kb/968872/en-us but the problem is that it covers fixed ports and is not compatible with the (default) dynamic-ports setting above.
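    For the scripted version, a hedged sketch (the path is a placeholder for your instance's Binn directory): a program-based firewall rule sidesteps the dynamic-ports problem because it follows the process rather than a fixed port.

      netsh advfirewall firewall add rule name="SQL Server Express" ^
          dir=in action=allow ^
          program="C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Binn\sqlservr.exe"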

    Read the article

  • central apache log analysis of many hosts

    - by Jason Antman
    We have 30+ Apache httpd servers, and are looking to perform analysis on the logs both for historical trending and near "real time" monitoring/alerting. I'm mainly interested in things like error rates (4xx/5xx), response time, and overall request rate, but it would also be very useful to pull out more compute-intensive statistics like unique client IPs and user agents per unit of time. I'm leaning towards building this as a centralized collector/server/storage, and am also considering the possibility of storing non-Apache logs (i.e. general syslog, firewall logs, etc.) in the same system. Obviously a large part of this will probably have to be custom (at least the connections between pieces and the parsing/analysis we do), but I haven't been able to find much information on people who have done stuff like this, at least at shops smaller than Google/Facebook/etc. who can throw their log data into a hundred-node compute cluster and run Map/Reduce on it. The main things I'm looking for are:
    - All open source
    - Some way of collecting logs from the Apache machines that isn't too resource-intensive, and that transports them relatively quickly over the network
    - Some way of storing them (NoSQL? key-value store?) on the backend, for a given amount of time (and then rolling them up into historical averages)
    - In the middle of this, a way of graphing in near-real-time (probably also with some statistical analysis on it), and hopefully alerting off of those graphs
    Any suggestions/pointers/ideas, either to "products"/projects or descriptions of how other people do this, would be greatly helpful. Unfortunately, we're not exactly a new-age-y devops shop: lots of old stuff, homogeneous infrastructure, and strained boxes.
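    As one concrete piece of the collection leg (a sketch using rsyslog's legacy imfile directives; the collector hostname, facility, and file names are placeholders), each web server can tail its access log and forward entries over TCP fairly cheaply:

      # /etc/rsyslog.d/apache-forward.conf (illustrative name):
      #   $ModLoad imfile
      #   $InputFileName /var/log/apache2/access.log
      #   $InputFileTag apache-access:
      #   $InputFileStateFile stat-apache-access
      #   $InputFileFacility local6
      #   $InputRunFileMonitor
      #   local6.* @@collector.example.com:514    # @@ = TCP, @ = UDP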

    Read the article

  • Copying windows home server backup offsite

    - by Simon
    What ways are there to copy a Windows Home Server backup to an offsite location? I'm talking specifically (and only) about the automated backup of my entire machine, not the shared network folders. I work away from home 90% of the time on my laptop, which has a 640GB drive, so the shared folders are essentially useless to me. I back up every night, but if my house burns down or is broken into then I'm in serious, serious trouble! I'm really looking for some alternative way to back up my entire machine, which must not interfere with the reliability or speed with which my WHS backs up my laptop every night. Either a way to 'export' a complete machine backup from the server, or recommendations on non-conflicting software I can use to back up to a 1TB drive at work, are what I'm looking for. Note: I believe that WHS uses its own completely proprietary backup and doesn't use things like a 'backup bit' or 'archive bit'. I just don't want to install some other backup software that will conflict. PS: I'm now running Windows 7 and just realized that I should probably check out the backup functionality it gives me. I assume that won't conflict, right? Edit: Thanks for the hosted solutions. I'd also appreciate ways to back up to an 'offsite' location that I control, like my office vs. my home. The hosted solutions, I think, will be too slow or expensive for my needs.

    Read the article

  • How to cache images on CloudFlare that are served to the client via JSON?

    - by Askar Ibragimov
    I am using a gallery on my website that gets its list of images from JSON sent by a PHP script. The JavaScript gallery calls the PHP backend, which replies with complex JSON in which images are specified as object fields. These fields do not necessarily include full URLs, merely a path to the needed images. I'd like to use CloudFlare and want these images to be cached there. How can I tell whether they are cached or not, and how do I make sure that they will be cached rather than treated as some sort of dynamic content?
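    A hedged way to check (the URL is a placeholder): CloudFlare reports cache behaviour in the CF-Cache-Status response header, so two successive requests for a cacheable image should go MISS then HIT:

      curl -sI https://www.example.com/images/photo.jpg | grep -i 'cf-cache-status'
      # HIT  = served from CloudFlare's edge cache
      # MISS = fetched from the origin (eligible, but not cached yet)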

    Read the article

  • Is there any good reason I would want my website to be framed?

    - by minitech
    I'm building a website that's not security-critical in any way, so having somebody put a page in an <iframe> is not particularly dangerous to its users. However, since my website doesn't have script plugins that will be used anywhere else, is there any reason why I shouldn't just apply:

      X-Frame-Options: Deny

    to every page on my website? Is there any valid reason for any other website to embed mine? I've seen plenty of content-stealing ones and attempts to hijack user accounts, but never a genuinely good use of frames that's not an explicit feature of the website.
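    If the answer is no, one hedged way to apply it everywhere (assuming Apache with mod_headers; adjust for your server):

      # httpd.conf, vhost, or .htaccess:
      #   Header always set X-Frame-Options "DENY"
      # nginx equivalent:
      #   add_header X-Frame-Options "DENY";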

    Read the article

  • Announcement: How-To Series Explaining Customizations Step By Step

    - by Oliver Steinmeier
    Yesterday we officially launched our new YouTube channel.  Today we are announcing another initiative that we have been working on for a while: to help you learn common customization tasks, we are going to publish a series of detailed How-To documents with lots of screenshots.  Many of these will also be the script for a YouTube video, giving you the choice to see it in action or go through the steps yourself guided by a PDF document. The focus of the initial set of How-Tos will be JDeveloper/ADF customizations, but over time we will expand into other areas.  Today's first document is meant to get everyone up to the point where a JDeveloper environment is up and running: a white paper that shows you how to set up JDeveloper, configure the integrated WLS domain, and make a very, very simple customization work. As always we are looking for your feedback.  Please let us know whether this is helpful for your work or learning, and what use cases you would like to see us document in these How-Tos.

    Read the article

  • What are these isolated resource requests in Apache's access_log?

    - by Greg
    I was looking at my Apache access log and came across some strange requests. A single IP address will access several resources (mostly CSS stylesheets and images), but no actual pages. Sometimes they request a resource that no longer exists on the server, or one that is still under the web root but no longer used (e.g. a resource in an old WordPress theme). Also:
    - The requests list no referrer
    - Looking up the IP address gives me no useful information
    - There doesn't seem to be any pattern among the IP addresses making these requests (e.g. they come from different countries)
    Are these just links from a stale cache somewhere? Could they be a sign of an attack of some sort? Here is a typical example:

      GET /wp-content/themes/my-theme/images/old-image.gif HTTP/1.1" 500 809 "-" "Mozilla/4.0 (compatible;)"

    This was one of about 10 similar requests, some for existing resources, some for older ones. There is no other sign of this IP address in access_log. Note the internal server error, which is a topic for a different thread. What I'm asking here is: where would isolated requests like this come from?
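    A couple of hedged one-liners for digging further (the log path and IP are placeholders): pull everything a suspect IP requested, and rank clients by request count to see whether the traffic is sparse, the way stale-cache hits tend to be:

      # Every request from one IP:
      grep '^203.0.113.7 ' /var/log/apache2/access_log
      # Busiest clients overall:
      awk '{print $1}' /var/log/apache2/access_log | sort | uniq -c | sort -rn | head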

    Read the article

  • Proper configuration for Windows SMTP Virtual Server to only send email from localhost, and tracking down source of spam emails

    - by ilasno
    We manage a server hosted on Amazon EC2, which has web applications that need to be able to send outgoing email. Recently we received a notice from Amazon about possible email abuse on that server, so I've been looking into it. It's Windows Server Datacenter (2003, I guess), and uses the SMTP Virtual Server (you know, the one that requires IIS 6 for admin). The settings on the Access tab are as follows:
    - Authentication: Anonymous
    - Connection: Only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server)
    - Relay: Only from 3 IP addresses (127.0.0.1 and 2 others that refer to that server)
    In the SMTP logs there are many entries like the following:

      2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 FROM: 0 0 4 0 26364 SMTP - - - -
      2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26536 SMTP - - - -
      2012-02-08 23:43:56 64.76.125.151 OutboundConnectionCommand SMTPSVC1 TO: 0 0 4 0 26536 SMTP - - - -
      2012-02-08 23:43:56 64.76.125.151 OutboundConnectionResponse SMTPSVC1 250+ok 0 0 6 0 26707 SMTP - - - -

    ([email protected] is sending quite a lot of emails :-/) Can anyone confirm whether the SMTP server settings seem correct? I'm also wondering if a web application on the machine could be exposing a contact form or something that would allow this sort of abuse; I'm looking into that (and how to look into that) further.

    Read the article

  • Abandonment to blame for the last JavaScript file not always being loaded?

    - by Larsenal
    I have a code snippet for an app that users load as a third-party script on their sites. The general sequence is as follows:
    1. The site loads http://www.example.com/foo.js
    2. foo.js does its stuff
    3. 1 to 2 seconds later, foo.js loads bar.js
    Now in a perfect world, I'd see matching counts for the calls to foo.js and bar.js. However, bar.js loads only about 94% of the time. I'm wondering how much of this discrepancy might be attributable to page abandonment, given that bar.js is delayed by 1 or 2 seconds. I posted here instead of Stack Overflow because I think it's more a question about typical time-on-page when users abandon a page.

    Read the article

  • How can I stop my web browser from redirecting to localhost when using WAMP on Windows 7?

    - by Josh
    I'm currently using Windows 7 with WAMP to try to work on some software, but my web browsers will not accept cookies from the "localhost" domain. I tried creating a few bogus domains in my hosts file by pointing them to 127.0.0.1, but when I type them in I am automatically redirected back to localhost. I have also configured virtual hosts in Apache to correspond with the domains I added to the hosts file, and it still redirects back to localhost. Is there anything special I must do on Windows 7 to get around this localhost redirect? I'll include my hosts file here:

      # Copyright (c) 1993-2009 Microsoft Corp.
      #
      # This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
      #
      # This file contains the mappings of IP addresses to host names. Each
      # entry should be kept on an individual line. The IP address should
      # be placed in the first column followed by the corresponding host name.
      # The IP address and the host name should be separated by at least one
      # space.
      #
      # Additionally, comments (such as these) may be inserted on individual
      # lines or following the machine name denoted by a '#' symbol.
      #
      # For example:
      #
      #      102.54.94.97     rhino.acme.com          # source server
      #       38.25.63.10     x.acme.com              # x client host

      # localhost name resolution is handled within DNS itself.
      #   127.0.0.1       localhost
      #   ::1             localhost
      127.0.0.1 magento.localhost.com www.localhost.com

    Thanks for looking :)
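    Two hedged checks worth running after editing the hosts file on Windows (the domain is taken from the file above): flush the resolver cache, then confirm the name actually resolves to 127.0.0.1 before blaming Apache:

      ipconfig /flushdns
      rem should answer from 127.0.0.1:
      ping -n 1 magento.localhost.com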

    Read the article

  • Webcast Replay: Extreme Performance for Consolidated Workloads with Oracle Exadata

    - by kimberly.billings
    If you missed our live webcast Extreme Performance for Consolidated Workloads with Oracle Exadata last week, the replay is now available. Watch the free on-demand webcast in which Tim Shetler, Vice President of Oracle Database Product Management, and Richard Exley, Consulting Member of Technical Staff, discuss how Oracle Exadata can help you significantly improve application performance and reduce infrastructure costs by consolidating transaction processing, data warehousing, or mixed workloads on Oracle Exadata. Note: (1) Turn off pop-up blockers if the slides do not advance automatically. (2) Slides are available for download.

    Read the article

  • Is it possible to auto-mount sshfs?

    - by Mark D
    Is it possible to auto-mount a remote FS using sshfs when a particular VPN connection comes up? Allow me to explain the scenario: I'm working remotely, and it helps if I can mount my home dir from a server in the office. To do that I need to VPN in, so within NetworkManager I select the relevant VPN and connect. It connects, but now I have to drop to the command line and mount my home dir on several machines. If I forget one machine, my local dev environment isn't as efficient. I suppose I could write a quick bash script to do this, but I'd rather it run automatically when I connect.
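    A hedged sketch using NetworkManager's dispatcher hooks (hosts, mount points, and the script name are placeholders): executable scripts in /etc/NetworkManager/dispatcher.d/ run on connection events, and the second argument is the event name:

      #!/bin/bash
      # /etc/NetworkManager/dispatcher.d/90-mount-home (make it executable)
      if [ "$2" = "vpn-up" ]; then
          sshfs user@devbox1:/home/user /mnt/devbox1 -o reconnect
          sshfs user@devbox2:/home/user /mnt/devbox2 -o reconnect
      fi
      if [ "$2" = "vpn-down" ]; then
          fusermount -u /mnt/devbox1
          fusermount -u /mnt/devbox2
      fi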

    Read the article

  • How to configure path or set environment variables for installation?

    - by Orr22
    I'm aiming to install APE, a simple code for pseudopotential generation, on Ubuntu 14.04 LTS, and I'm getting this error while running ./configure:

      checking for gsl-config... no
      checking for GSL - version >= 1.0... no
      *** The gsl-config script installed by GSL could not be found
      *** If GSL was installed in PREFIX, make sure PREFIX/bin is in
      *** your path, or set the GSL_CONFIG environment variable to the
      *** full path to gsl-config.
      configure: error: could not find required gsl library

    I checked, and I already have GSL installed:

      :~/Programas/ape-2.2.0$ dpkg -l | grep gsl
      ii libgsl0ldbl 1.16+dfsg-1ubuntu1 i386 GNU Scientific Library (GSL) -- library package

    So I have the library, but the program's configure isn't finding it. Any help? Thanks in advance.
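    A likely fix (hedged, based on the package shown above being the runtime library only): gsl-config ships in the GSL development package, not in libgsl0ldbl, so installing the -dev package should satisfy configure:

      sudo apt-get install libgsl0-dev
      gsl-config --version    # should now print the GSL version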

    Read the article

  • error: you need to load kernel first

    - by Angelos318
    I made a clean install of Ubuntu 11.10 on my Sony Vaio laptop, and when the installation was done it prompted me to remove the USB stick I was installing the distro from and press Enter to reboot. After this reboot, the first thing I got was the following error:

      error: couldn't read file
      error: you need to load the kernel first
      Press any key to continue..

    After that it throws me back to the GRUB select screen:

      Ubuntu, with linux 3.0.0-14-generic-pae
      recovery mode
      previous linux versions (none, since I made a clean install)
      memory test

    If I choose the first option it shows only a black screen and never loads anything. If I reboot, the same thing happens. Could I repair this using boot-repair? Is there any other way? Note: I know nothing about Linux, so I am a total noob on this one.
    Update: boot-repair did not help. Grub.cfg here: http://pastebin.com/GKLuDuhM Boot Info Script: http://pastebin.com/indARkKJ

    Read the article

  • Value of Itanium or Sparc over x86_64 for Oracle Deployment

    - by Antitribu
    We are looking at a new environment to run our Oracle database on SUSE (potentially migrating to Red Hat). Our database is approximately 100GB and performs adequately on our current hardware (x86_64) with approximately 6GB of RAM allocated to it. We are growing quickly, however, and will require more performance shortly. Given the cost of Oracle licenses, we would like to maximize the value of each license by choosing the most appropriate CPU to run the software on. The questions are:
    - Are there substantial benefits to looking at Itanium or SPARC hardware, and are there any drawbacks?
    - Is there a point where one starts to scale out better?
    - What are the long-term support options for Itanium?
    - Given the dominance of x86, would it be safer long term to stick with x86?
    - On average, what would be the performance benefit of implementing an Oracle database on Itanium or SPARC over x86_64? Is this an issue at all, or will other factors (IO/RAM) cap out first?
    If anyone can point me towards solid documentation comparing the platforms, with good case analysis of when to choose which, I'm more than happy to accept that as an answer.
    Edit: Added SPARC as an option, as it was previously not considered; with the recent Oracle-Sun acquisition it seems very relevant.

    Read the article
