Search Results

Search found 25534 results on 1022 pages for 'write powershell'.


  • How to reference a sql server with a slash (\) in its name?

    - by Bill Paetzke
    Givens: One SQL Server is named DevServerA. Another is named DevServerB\2K5.

    Problem: From DevServerA, how can I write a query that references DevServerB\2K5? I tried a sample, dummy query (running it from DevServerA):

        SELECT TOP 1 * FROM DevServerB\2K5.master.sys.tables

    And I get the error:

        Msg 102, Level 15, State 1, Line 2
        Incorrect syntax near '\.'.

    However, I know my syntax is almost correct, since the other way around works (running this query from DevServerB\2K5):

        SELECT TOP 1 * FROM DevServerA.master.sys.tables

    Please help me figure out how to reference DevServerB\2K5 from DevServerA. Thanks.
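    A commonly suggested fix - assuming a linked server for the named instance has already been registered on DevServerA (e.g. via sp_addlinkedserver) - is to quote the server\instance name in square brackets so the backslash isn't parsed as a delimiter. A minimal sketch:

        -- Hypothetical sketch: bracket-quote the named instance in the four-part name
        SELECT TOP 1 *
        FROM [DevServerB\2K5].master.sys.tables;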

    Read the article

  • How to grab data from webpage in Chrome and output into Chrome extension popup?

    - by chimerical
    For a Google Chrome extension, none of the Javascript I write to manipulate the DOM of the extension popup.html seems to have any effect on the popup's DOM. I can manipulate the DOM of the current webpage in the browser just fine by using content_script.js, and I'm interested in grabbing data from the webpage and outputting it into the extension popup, like so (below: popup.html):

        <div id="extensionpopupcontent">Links</div>
        <a onclick="click()">Some Link</a>
        <script type="text/javascript">
        function click() {
            chrome.tabs.executeScript(null, {file: "content_script.js"});
            document.getElementById("extensionpopupcontent").innerHTML = variableDefinedInContentScript;
            window.close();
        }
        </script>

    I tried using chrome.extension.sendRequest from the documentation at http://code.google.com/chrome/extensions/messaging.html, but I'm not sure how to properly use it in my case, specifically the greeting and the response.

    contentscript.js
    ================

        chrome.extension.sendRequest({greeting: "hello"}, function(response) {
            console.log(response.farewell);
        });
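    One hedged sketch of the messaging pattern the linked documentation describes, using the legacy sendRequest/onRequest API of that era (modern extensions use chrome.runtime messaging instead): the popup listens for whatever the content script sends, so it only works while the popup is open, and the element id is taken from the question.

        // popup.html script - hypothetical sketch, not the extension's actual code
        chrome.extension.onRequest.addListener(function(request, sender, sendResponse) {
            // "request" is the object the content script passed to sendRequest()
            document.getElementById("extensionpopupcontent").innerHTML = request.greeting;
            sendResponse({farewell: "goodbye"});
        });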

    Read the article

  • Tools for code snippet execution

    - by nzpcmad
    By "code snippet execution", I mean the ability to write a few lines of code, run and test it without having to fire up an IDE and create a dummy project. It's incredibly useful for helping people with a small code sample without creating a project, compiling everything cleanly, sending them the code snippet and deleting the project. I'm not asking about the best code snippets or a snippet editor or where to store snippets! For C#, I use Snippet Compiler. For Java, I use Eclipse Scrapbook. For LINQ, I use LINQPad. Any suggestions for other (better?) tools? e.g. is there one for Java that doesn't involve firing up Eclipse? What about C?

    Read the article

  • Determining which JavaScript/CSS browser features are required

    - by Alan Neal
    My website uses a variety of technologies, such as jQuery and newer CSS definitions (e.g., -moz-selection, -webkit-user-select). The site works perfectly with Google Chrome and Safari, but has some quirkiness in Firefox, IE, and some of the other browsers. I want to write a script to check for necessary browser features but, with several thousand lines of code and CSS definitions, I'm not certain which features I should be looking for. Is there some sort of online analysis (similar to how JSLint operates) that would tell me which features my script and CSS files need? Are there tools (like Firebug) that provide this info?

    Read the article

  • click li, go to next li

    - by steve
    Can't find a simple solution to this; I know it's easy and I've tried a few things, but I can't quite get it to work. I'm currently working on a side-scrolling site, and I want it so that every time you click an image (contained in an li), the page scrolls to the next li. I have the jQuery plugin localScroll so it smoothly goes from one li to the next, and that's working. I now need to write code that triggers jQuery to use the localScroll function and go to the next li. Right now I have this, but I know it's not right:

        $(document).ready(function () {
            var gallery = $('.wrapper ul li')
            $(gallery).click(function() {
                $.localscroll().next(li);
            });
        });
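    For what it's worth, here is a minimal sketch that sidesteps the plugin and animates the page's scrollLeft with core jQuery; the selectors come from the question and the 800 ms duration is just a placeholder.

        // Hypothetical sketch: scroll horizontally to the li after the one clicked
        $(document).ready(function () {
            $('.wrapper ul li').click(function () {
                var next = $(this).next('li');
                if (next.length) {
                    $('html, body').animate({ scrollLeft: next.offset().left }, 800);
                }
            });
        });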

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs that backed up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that Database Mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar third-party products, that would track all these things for you. But at that moment, we had no recourse but to write our own PowerShell scripts to do it.

    Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster... we don't need backups"

    Sadly, I've heard this line more than I would have liked to. You need to understand that a cluster is built on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You also need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO.
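    (As a quick aside before digging further into RTO: a minimal T-SQL sketch of the full, differential, and log backups just mentioned. The database name, paths, and options are placeholders, not anything prescribed by the post.)

        -- Hypothetical sketch: the three backup types discussed above
        BACKUP DATABASE [SalesDB] TO DISK = N'R:\Backups\SalesDB_full.bak'
            WITH CHECKSUM, COMPRESSION;                 -- full backup

        BACKUP DATABASE [SalesDB] TO DISK = N'R:\Backups\SalesDB_diff.bak'
            WITH DIFFERENTIAL, CHECKSUM, COMPRESSION;   -- changes since the last full

        BACKUP LOG [SalesDB] TO DISK = N'R:\Backups\SalesDB_log.trn'
            WITH CHECKSUM, COMPRESSION;                 -- transaction log backup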
    Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when an emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "how often" question for a second. If you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even more often than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, sent to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a scheme together. That right there is the first step towards a practical disaster recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
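    Since the post leans so heavily on restore testing and offloaded consistency checks, here is a minimal T-SQL sketch of that routine. The database name, file paths, and logical file names are placeholders; PHYSICAL_ONLY and the trace flags are the ones discussed above.

        -- Hypothetical sketch. On the offload/test server: restore the latest backup, then run the full checks
        RESTORE DATABASE [SalesDB]
            FROM DISK = N'R:\Backups\SalesDB_full.bak'
            WITH MOVE N'SalesDB'     TO N'D:\Data\SalesDB.mdf',   -- logical file names are placeholders
                 MOVE N'SalesDB_log' TO N'L:\Log\SalesDB_log.ldf',
                 REPLACE, RECOVERY;

        DBCC CHECKDB (N'SalesDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

        -- On the production server: the lighter-weight physical checks
        DBCC TRACEON (2562, 2549, -1);   -- the trace flags mentioned above (enabled globally)
        DBCC CHECKDB (N'SalesDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;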

    Read the article

  • Doctrine2 use of criteria inside the entity class

    - by Piotr Kowalczuk
    I'm trying to write a method whose task is to return only selected elements of the collection of items associated with a particular entity.

        /**
         * @ORM\OneToMany(targetEntity="PlayerStats", mappedBy="summoner")
         * @ORM\OrderBy({"player_stat_summary_type" = "ASC"})
         */
        protected $player_stats;

        public function getPlayerStatsBySummaryType($summary_type)
        {
            if ($this->player_stats->count() != 0) {
                $criteria = Criteria::create()
                    ->where(Criteria::expr()->eq("player_stat_summary_type", $summary_type));
                return $this->player_stats->matching($criteria)->first();
            }
            return null;
        }

    But I get this error:

        PHP Fatal error: Cannot access protected property Ranking\CoreBundle\Entity\PlayerStats::$player_stat_summary_type in /Users/piotrkowalczuk/Sites/lolranking/vendor/doctrine/common/lib/Doctrine/Common/Collections/Expr/ClosureExpressionVisitor.php on line 53

    Any idea how to fix this?

    Read the article

  • Groovy Grails, How do you stream or buffer a large file in a Controller's response?

    - by Julian Noye
    Hi guys, I have a controller that makes a connection to a URL to retrieve a CSV file. I am able to send the file in the response using the following code, and this works fine:

        def fileURL = "www.mysite.com/input.csv"
        def thisUrl = new URL(fileURL);
        def connection = thisUrl.openConnection();
        def output = connection.content.text;
        response.setHeader "Content-disposition", "attachment; filename=${'output.csv'}"
        response.contentType = 'text/csv'
        response.outputStream << output
        response.outputStream.flush()

    However, I don't think this method is appropriate for a large file, as the whole file is loaded into the controller's memory. I want to be able to read the file chunk by chunk and write it to the response chunk by chunk. Any ideas?
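    A minimal sketch of one streaming approach, relying on Groovy's OutputStream << InputStream operator to copy the data in chunks instead of reading it all into a String first. The URL and filename come from the question (with the protocol added); treat this as an assumption, not tested Grails code.

        // Hypothetical sketch: stream the remote CSV straight through to the client
        def connection = new URL("http://www.mysite.com/input.csv").openConnection()
        response.setHeader "Content-disposition", "attachment; filename=output.csv"
        response.contentType = 'text/csv'
        connection.inputStream.withStream { input ->
            response.outputStream << input   // copies in buffered chunks, not all at once
        }
        response.outputStream.flush()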

    Read the article

  • Android serialization: ImageView

    - by embo
    I have a simple class:

        public class Ball2 extends ImageView implements Serializable {
            public Ball2(Context context) {
                super(context);
            }
        }

    Serialization works fine:

        private void saveState() throws IOException {
            ObjectOutputStream oos = new ObjectOutputStream(openFileOutput("data", MODE_PRIVATE));
            try {
                Ball2 data = new Ball2(Game2.this);
                oos.writeObject(data);
                oos.flush();
            } catch (Exception e) {
                Log.e("write error", e.getMessage(), e);
            } finally {
                oos.close();
            }
        }

    But deserialization:

        private void loadState() throws IOException {
            ObjectInputStream ois = new ObjectInputStream(openFileInput("data"));
            try {
                Ball2 data = (Ball2) ois.readObject();
            } catch (Exception e) {
                Log.e("read error", e.getMessage(), e);
            } finally {
                ois.close();
            }
        }

    fails with this error:

        03-24 21:52:43.305: ERROR/read error(1948): java.io.InvalidClassException: android.widget.ImageView; IllegalAccessException

    How do I deserialize the object correctly?

    Read the article

  • Ajax data two-way data binding strategies?

    - by morgancodes
    I'd like to:

    1) Create form fields and populate them with data from JavaScript objects.
    2) Update those backing objects whenever the value of the form field changes.

    Number 1 is easy. I have a few JS template systems I've been using that work quite nicely. Number 2 may require a bit of thought. A quick Google search on "ajax data binding" turned up a few systems which seem basically one-way: they're designed to update a UI based on backing JS objects, but don't seem to address the question of how to update those backing objects when changes are made to the UI. Can anyone recommend any libraries which will do this for me? It's something I could write myself without too much trouble, but if this question has already been thought through, I'd rather not duplicate the work.

    Read the article

  • Hudson interactive mode:- Is there one?

    - by Rupal Desai
    I'm pretty new to the Hudson build system. I currently have my builds run from a combination of Perl/CGI scripts, with the ability to start them from a browser. What I need is the ability in Hudson to check out a file from Perforce (I can do that), parse that file (I can write a script for this), and, based on the result of the parse, give the user the ability to choose among various options for what to build (compile). Is this possible? I'm not sure if I should tie a couple of different projects together to do this or not. Any ideas on how this could be achieved would be very helpful.

    Read the article

  • Ruby as a scripting language for web server

    - by Olivier Lalonde
    Is it possible to use Ruby as a scripting language with an HTTP server? I'd like to be able to simply put some Ruby files in a web directory and be able to execute them from my browser - just like I did with PHP. I have absolutely nothing against frameworks such as RoR, but I was told that I should first learn Ruby and only then move on to higher-level frameworks. Of course, I could write some Ruby scripts and run them in the console, but I would prefer getting the input/output from my browser :) Is that possible at all? Otherwise, how hard would it be for me to build a quick and simple web framework?
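    For the "just drop a file in a web directory" feeling, the classic answer of that era is plain CGI. A minimal sketch, assuming the web server (Apache, for instance) is configured to execute .rb files as CGI scripts:

        #!/usr/bin/ruby
        # Hypothetical sketch: a plain Ruby CGI script
        require 'cgi'

        cgi = CGI.new
        puts cgi.header("text/html")                      # prints the Content-Type header
        puts "<h1>Hello from Ruby, #{cgi['name']}</h1>"   # echoes the ?name= query parameter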

    Read the article

  • Call a macro every time any method is called - Objective C

    - by Jacob Relkin
    Hi, I wrote a debug macro that prints the passed-in string to the console whenever the global kDebug flag == YES. I need to print out the name of a method and its class name whenever any method is called. That works fine when I painstakingly go through every method and write the name of the class and the method in a string. Is there any special handler that gets called when any method in Objective-C is called, and if so, is there a way I can somehow override it to call my debug macro? The entire purpose of this is so that I don't have to go through every method in my code and hand-code the method signature in the debug macro call. Thanks

    Read the article

  • Subprocess fails to catch the standard output

    - by user343934
    I am trying to generate a tree from a FASTA file input, with the alignment done via MuscleCommandline:

        import sys, os, subprocess
        from Bio import AlignIO
        from Bio.Align.Applications import MuscleCommandline

        cline = MuscleCommandline(input="c:\Python26\opuntia.fasta")
        child = subprocess.Popen(str(cline),
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE,
                                 shell=(sys.platform != "win32"))
        align = AlignIO.read(child.stdout, "fasta")
        outfile = open('c:\Python26\opuntia.phy', 'w')
        AlignIO.write([align], outfile, 'phylip')
        outfile.close()

    I always encounter this problem:

        Traceback (most recent call last):
          File "<string>", line 244, in run_nodebug
          File "C:\Python26\muscleIO.py", line 11, in <module>
            align=AlignIO.read(child.stdout,"fasta")
          File "C:\Python26\Lib\site-packages\Bio\AlignIO\__init__.py", line 423, in read
            raise ValueError("No records found in handle")
        ValueError: No records found in handle

    Read the article

  • File not readable exception - pear/Config_Lite

    - by CasperNine
    I have two config files, located at /etc/svnauth and /var/www/svnauth. I have given read/write access for both files as shown below:

        chown -R apache:apache /etc/svnauth
        chmod -R 770 /etc/svnauth
        chown -R apache:apache /var/www/svnauth
        chmod -R 770 /var/www/svnauth

    When I try to read these two files using pear/Config_Lite, /etc/svnauth always fails. I can successfully read the /var/www/svnauth file. Any reasons for this? What am I missing here? Following is the error message I get:

        Fatal error: Uncaught exception 'Config_Lite_Exception_Runtime' with message 'file not readable: /etc/svnauth' in /var/www/html/svnmanager/Config/Lite.php:112
        Stack trace:
        #0 /var/www/html/svnmanager/index.php(60): Config_Lite->read('/etc/svnauth')
        #1 {main}
        thrown in /var/www/html/svnmanager/Config/Lite.php on line 112

    Read the article

  • Using PIG with Hadoop, how do I regex match parts of text with an unknown number of groups?

    - by lmonson
    I'm using Amazon's elastic map reduce. I have log files that look something like this:

        random text foo="1" more random text foo="2" more text
        noise foo="1" blah blah blah
        foo="1" blah blah foo="3" blah blah foo="4"
        ...

    How can I write a pig expression to pick out all the numbers in the 'foo' expressions? I prefer tuples that look something like this:

        (1,2)
        (1)
        (1,3,4)

    I've tried the following:

        TUPLES = foreach LINES generate FLATTEN(EXTRACT(line,'foo="([0-9]+)"'));

    But this yields only the first match in each line:

        (1)
        (1)
        (1)

    Read the article

  • How to generate pdf files _with_ utf-8 multibyte characters using Zend Framework

    - by Sejanus
    Hello, I've got a "little" problem with the Zend Framework Zend_Pdf class. Multibyte characters are stripped from generated PDF files. E.g. when I write aabccdee it becomes abcd, with the Lithuanian letters stripped. I'm not sure if it's specifically a Zend_Pdf problem or PHP in general. The source text is encoded in UTF-8, as is the PHP source file which does the job. Thank you in advance for your help ;) P.S. I run Zend Framework v. 1.6 and I use the FONT_TIMES_BOLD font. FONT_TIMES_ROMAN does work.
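    The standard Times/Helvetica fonts only cover Latin-1 glyphs, so one commonly suggested approach is to embed a TrueType font that contains the needed characters and tell drawText the text is UTF-8. A minimal sketch; the font path is a placeholder:

        <?php
        // Hypothetical sketch: embedded TTF font plus explicit UTF-8 encoding
        require_once 'Zend/Pdf.php';

        $pdf  = new Zend_Pdf();
        $page = new Zend_Pdf_Page(Zend_Pdf_Page::SIZE_A4);
        $font = Zend_Pdf_Font::fontWithPath('/path/to/DejaVuSerif.ttf');  // any TTF with Lithuanian glyphs
        $page->setFont($font, 14);
        $page->drawText('ąčęėįšųū - multibyte test', 50, 700, 'UTF-8');   // pass the source encoding
        $pdf->pages[] = $page;
        $pdf->save('test.pdf');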

    Read the article

  • Looking for a php template Parser with nesting

    - by christian
    Hi, I am looking for a PHP parser that can do this:

        {tag}                              Replace the tag with text coming from a function
        {tag(params)}                      It must support params
        {tag({tag(params)},{tag(params)})} It must support nesting
        {tag()? else }                     It must support tests
        {$tag=value}                       It must support variables

    Does anyone know of a parser that can do this? Or maybe you know how I can create one. I have tried to do this with preg, but it seems impossible to handle nesting. Smarty seems to be a bit too big, and I don't know if you can disable all the extra functionality it has. I only need the functionality I have listed above. In Smarty you are able to write PHP code, and I don't like that:

        {php} {/php}

    So if I am going to use it, I need to be able to turn that off. (I am going to use it with CodeIgniter.)

    Read the article

  • Writing Facebook apps in C#

    - by Water Cooler v2
    Hi, I am a C# developer and fancy the idea of writing a C# app or two to integrate with the Facebook API. I read from this page: http://wiki.developers.facebook.com/index.php/User:C_Sharp that there's this Microsoft SDK for Facebook Platform that has binary assemblies that I can use to write my C# app. As a start, I want to try out the example mentioned on the above-mentioned page -- one that gets me a friend list. The problem is: I am completely new to this Facebook development thing and I see I am going to need, at the very least, an API Key and some Facebook service Secret key or some such, to begin writing some code. Do I also need a developer account? Where do I get all these things from?

    Read the article

  • Join one row to multiple rows in another table

    - by Ghostrider
    I have a table of entities (let's call them people) and properties (one person can have an arbitrary number of properties). Ex:

        People
        Name     Age
        ------------
        Jane     27
        Joe      36
        Jim      16

        Properties
        Name     Property
        -----------------
        Jane     Smart
        Jane     Funny
        Jane     Good-looking
        Joe      Smart
        Joe      Workaholic
        Jim      Funny
        Jim      Young

    I would like to write an efficient select that would select people based on age and return all or some of their properties. Ex: people older than 26:

        Name     Properties
        Jane     Smart, Funny, Good-looking
        Joe      Smart, Workaholic

    It's also acceptable to return one of the properties and the total property count. The query should be efficient: there are millions of rows in the people table and hundreds of thousands of rows in the properties table (so most people have no properties). There are hundreds of rows selected at a time. Is there any way to do it?
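    The question doesn't name the database engine; assuming SQL Server, a minimal sketch of the usual comma-separated-list pattern (a correlated FOR XML PATH subquery; newer versions would use STRING_AGG), with the table and column names taken from the question:

        -- Hypothetical sketch, SQL Server syntax
        SELECT p.Name,
               STUFF((SELECT ', ' + pr.Property
                      FROM Properties pr
                      WHERE pr.Name = p.Name
                      FOR XML PATH('')), 1, 2, '') AS Properties
        FROM People p
        WHERE p.Age > 26;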

    Read the article

  • PHP equivalent to Perl format function

    - by Dustin Hansen
    Is there an equivalent to Perl's format function in PHP? I have a client that has an old-ass Okidata dot-matrix printer, and I need a good way to format receipts and bills for this arcane beast. I remember easily doing this in Perl with something like:

        format BILLFORMAT =
        Name: @>>>>>>>>>>>>>>>>>>>>>> Age: @###
        $name, $age
        .
        write;

    Any ideas would be much appreciated; banging my head on the wall with this one. O.o
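    A minimal sketch of the usual PHP substitutes, sprintf() and str_pad(), which give the same kind of fixed-width, right-justified fields as the picture line above; the widths and sample values are just examples.

        <?php
        // Hypothetical sketch: fixed-width fields with sprintf/str_pad
        $name = "Jane Smith";
        $age  = 27;

        // %22s right-justifies the name in 22 characters, %3d the age in 3
        echo sprintf("Name: %22s Age: %3d\n", $name, $age);

        // str_pad() gives the same control when building the line piece by piece
        echo "Name: " . str_pad($name, 22, " ", STR_PAD_LEFT)
           . " Age: " . str_pad($age, 3, " ", STR_PAD_LEFT) . "\n";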

    Read the article

  • how to profile my code??

    - by kaki
    I want to know how to profile my code. I have gone through the docs, but as there were no example codes given, I could not get anything from them. I have a large piece of code and it is taking so much time, hence I want to profile it and increase its speed. I haven't written my code in methods - there are a few in between, but not throughout - and I don't have any main in my code. I want to know how to use profiling and am looking for some example or sample code of how to profile. I tried psyco, i.e. just added two lines at the top of my code:

        import psyco
        psyco.full()

    Is this right? It did not show any improvement. Please suggest other ways of speeding up. Thanks in advance.
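    A minimal sketch of the standard-library profiler, cProfile; slow_work() is just a stand-in for whatever the real script does. A whole script can also be profiled without touching the code via "python -m cProfile myscript.py".

        # Hypothetical sketch: profiling with cProfile and pstats
        import cProfile
        import pstats

        def slow_work():
            total = 0
            for i in range(100000):
                total += i * i
            return total

        # Run the code under the profiler and write the stats to a file
        cProfile.run('slow_work()', 'profile.out')

        # Print the 10 most expensive calls, sorted by cumulative time
        pstats.Stats('profile.out').sort_stats('cumulative').print_stats(10)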

    Read the article

  • strange problem with WriteBeginTag

    - by user276640
    I use this code, but it renders incorrectly:

        <li class="dd0"><div id="dt1"<a href="http://localhost:1675/Category/29-books.aspx">Books</a></div></li>

    There is no > in the opening div tag. What's the problem?

        writer.WriteBeginTag("li");
        //writer.WriteAttribute("class", this.CssClass);
        writer.WriteAttribute("class", "dd0");
        if (!String.IsNullOrEmpty(this.LiLeftMargin))
        {
            writer.WriteAttribute("style", string.Format("margin-left: {0}px", this.LiLeftMargin));
        }
        writer.Write(HtmlTextWriter.TagRightChar);
        writer.WriteBeginTag("div");
        writer.WriteAttribute("id", "dt1");
        this.HyperLink.RenderControl(writer);
        writer.WriteEndTag("div");
        writer.WriteEndTag("li");
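    For what it's worth, the rendered output matches the code as posted: WriteBeginTag only emits "<div" plus its attributes, so the tag has to be closed explicitly before the child control is rendered - the same thing TagRightChar already does for the li. A hedged sketch of that one step:

        writer.WriteBeginTag("div");
        writer.WriteAttribute("id", "dt1");
        writer.Write(HtmlTextWriter.TagRightChar);   // close the <div ...> before rendering its contents
        this.HyperLink.RenderControl(writer);
        writer.WriteEndTag("div");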

    Read the article

  • How to keep material on one double page in latex ?

    - by drasto
    I have a two-sided document in LaTeX (so I use the twoside option). I need to keep some material (text, pictures...) on one double page. In other words, I want to allow a page break from an odd to an even page, but I want to prohibit breaks from an even to an odd page. I tried to write a macro:

        \newcommand{\nl}{ \\
          \ifodd\c@page \relax \else \nopagebreak \fi}

    and use it instead of \\ (I don't use any other line break commands in my document), but it does not work. Thanks for all answers.

    Read the article

  • Grand Central Strategy for Opening Multiple Files

    - by user276632
    I have a working implementation using Grand Central Dispatch queues that (1) opens a file and computes an OpenSSL DSA hash on "queue1", and (2) writes out the hash to a new "sidecar" file for later verification on "queue2". I would like to open multiple files at the same time, but based on some logic that doesn't "choke" the OS by having hundreds of files open and exceeding the hard drive's sustainable output. Photo-browsing applications such as iPhoto or Aperture seem to open multiple files and display them, so I'm assuming this can be done. I'm assuming the biggest limitation will be disk I/O, as the application can (in theory) read and write multiple files simultaneously. Any suggestions? TIA

    Read the article
