Search Results

Search found 137 results on 6 pages for 'andres jaan tack'.

Page 5/6 | < Previous Page | 1 2 3 4 5 6  | Next Page >

  • Remote reboot of windows to knoppix

    - by user64452
    I am attempting to develop an auditing application. This audit application will be employed on Windows networks. The audit will need to discover hardware and software details of all machines attached to the network (including printers). I do not want to have to install this application on each workstation. The audit app needs to discover all the IP addresses of all the networked workstations. I have been prototyping this app for the last couple of months and have decided to try a new tack. Is this possible? a) You have a Windows network, min Windows XP SP3 and upwards. b) Maximum of 100 networked machines (if that matters). c) I need to remotely reboot each Windows machine in turn on the entire network and get it to start up using UNIX, say Knoppix for example. d) However, the Knoppix live CD is only available from one of the networked machines. Questions... Morphology? Longevity? Incept dates? Cheers DD
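
    One hedged note on part c): the reboot half on its own is scriptable from any admin workstation with the stock shutdown command (the host name below is a placeholder); getting the box to come back up in Knoppix rather than Windows is the hard part and needs PXE/network-boot plumbing that this does not cover.

      rem restart one remote Windows host (run with admin rights on the domain)
      shutdown /r /f /t 0 /m \\WORKSTATION01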

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD and it's the dislikes I'll focus on. The idea of having unit tests that cover virtually every line of code that I’ve written that I have to refactor every time I refactor my code makes me shudder. Doing this way makes me take nearly twice as long as it would otherwise take and I don’t feel like I get sufficient benefits from it. Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies with no SOLID concerns are a sure-fire reason you would do this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove these dependencies, but this should quickly be resolved as you refactor. If you find yourself still doing it then stop and look back at your design. Don’t get me wrong, I’m a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is “manual”. Either I write a small UI on top of my service that allows me to plug in values and try it or write some quick API tests that I throw away as soon as I have validated them. The problem with this is that a UI can make assumptions on your code that you then just unit test around, and very quickly the design becomes bad and your technical debt sweeps in. If you want to blackbox test your code with a UI then do so after your TDD cycles, not before. This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that “back into a solution”. By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another and when they are done, they’ve got a monstrosity of special cases each designed to handle one specific scenario. There’s way more code than there should be and it’s way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that’s a relatively monolithic exercise. TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve these problems, only good communication and practices like pairing will help. Above all else, an assumption that TDD replaces a methodology is a mistake; combine it with whatever works for your team\business, but only good communication will help. A good naming scheme\structure for folders, files and tests can help you and your team isolate what tests are for what.

    Read the article

  • Thoughts on C# Extension Methods

    - by Damon
    I'm not a huge fan of extension methods.  When they first came out, I remember seeing a method on an object that was fairly useful, but when I went to use it another piece of code that method wasn't available.  Turns out it was an extension method and I hadn't included the appropriate assembly and imports statement in my code to use it.  I remember being a bit confused at first about how the heck that could happen (hey, extension methods were new, cut me some slack) and it took a bit of time to track down exactly what it was that I needed to include to get that method back.  I just imagined a new developer trying to figure out why a method was missing and fruitlessly searching on MSDN for a method that didn't exist and it just didn't sit well with me. I am of the opinion that if you have an object, then you shouldn't have to include additional assemblies to get additional instance level methods out of that object.  That opinion applies to namespaces as well - I do not like it when the contents of a namespace are split out into multiple assemblies.  I prefer to have static utility classes instead of extension methods to keep things nicely packaged into a cohesive unit.  It also makes it abundantly clear where utility methods are used in code.  I will concede, however, that it can make code a bit more verbose and lengthy.  There is always a trade-off. Some people harp on extension methods because it breaks the tenants of object oriented development and allows you to add methods to sealed classes.  Whatever.  Extension methods are just utility methods that you can tack onto an object after the fact.  Extension methods do not give you any more access to an object than the developer of that object allows, so I say that those who cry OO foul on extension methods really don't have much of an argument on which to stand.  In fact, I have to concede that my dislike of them is really more about style than anything of great substance. One interesting thing that I found regarding extension methods is that you can call them on null objects. Take a look at this extension method: namespace ExtensionMethods {   public static class StringUtility   {     public static int WordCount(this string str)     {       if(str == null) return 0;       return str.Split(new char[] { ' ', '.', '?' },         StringSplitOptions.RemoveEmptyEntries).Length;     }   }   } Notice that the extension method checks to see if the incoming string parameter is null.  I was worried that the runtime would perform a check on the object instance to make sure it was not null before calling an extension method, but that is apparently not the case.  So, if you call the following code it runs just fine. string s = null; int words = s.WordCount(); I am a big fan of things working, but this seems to go against everything I've come to know about instance level methods.  However, an extension method is really a static method masquerading as an instance-level method, so I suppose it would be far more frustrating if it failed since there is really no reason it shouldn't succeed. Although I'm not a fan of extension methods, I will say that if you ever find yourself at an impasse with a die-hard fan of either the utility class or extension method approach, then there is a common ground.  Extension methods are defined in static classes, and you call them from those static classes as well as directly from the objects they extend.  
So if you build your utility classes using extension methods, then you can have it your way and they can have it theirs. 
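
    For example, the WordCount extension above can be invoked either way (assuming the ExtensionMethods namespace is imported), which is what makes the compromise workable:

      string s = "one two three";
      int viaExtension = s.WordCount();              // extension-method syntax
      int viaStatic = StringUtility.WordCount(s);    // same method, called as a plain static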

    Read the article

  • Correct way to import Blueprint's ie.css via DotLess in a Spark view

    - by Chris F
    I am using the Spark View Engine for ASP.NET MVC2 and trying to use Blueprint CSS. The quick guide to Blueprint says to add links to the css files like so: <link rel="stylesheet" href="blueprint/screen.css" type="text/css" media="screen, projection"> <link rel="stylesheet" href="blueprint/print.css" type="text/css" media="print"> <!--[if lt IE 8]><link rel="stylesheet" href="blueprint/ie.css" type="text/css" media="screen, projection"><![endif]--> But I'm using DotLess and wish to simplify Blueprint as suggested here. So I'm doing this in my site.less (which gets compiled to site.min.css by Chirpy): @import "screen.css"; #header { #title { .span-10; .column; } } ... Now my site can just reference site.min.css and it includes blueprint's screen.css, which includes my reset. I can also tack on an @import "print.css" after my @import "screen.css" if desired. But now, I'm trying to figure out the best way to bring in the ie.css file to have Blueprint render correctly in IE6 & IE7. In my Spark setup, I have a partial called _Styles.spark that is brought into the Application.spark and is passed a view model that includes the filenames for all stylesheets to include (and an HtmlExtension to get the full path) and they're added using an "each" iterator. <link each="var styleSheet in Model.Styles" href="${Html.Stylesheet(styleSheet)}" rel="stylesheet" type="text/css" media="all"/> Should I simply put this below the above line in my _Styles.spark file? <!--[if lt IE 8]><link rel="stylesheet" href="${Html.Stylesheet("ie.css")}" type="text/css" media="screen, projection"><![endif]--> Will Spark even process it because it's surrounded by a comment?

    Read the article

  • WebSVN with VisualSVN Server, anyone gotten authentication to work?

    - by Lasse V. Karlsen
    I have a VisualSVN Server installed on a Windows server, serving several repositories. Since the web-viewer built into VisualSVN server is a minimalistic subversion browser, I'd like to install WebSVN on top of my repositories. The problem, however, is that I can't seem to get authentication to work. Ideally I'd like my current repository authentication as specified in VisualSVN to work with WebSVN, so that though I see all the repository names in WebSVN, I can't actually browse into them without the right credentials. By visiting the cached copy of the topmost link on this google query you can see what I've found so far that looks promising. (the main blog page seems to have been destroyed, domain of the topmost page I'm referring to is the-wizzard.de) There I found some php functions I could tack onto one of the php files in WebSVN. I followed the modifications there, but all I succeeded in doing was make WebSVN ask me for a username and password and no matter what I input, it won't let me in. Unfortunately, php and apache is largely black magic to me. So, has anyone successfully integrated WebSVN with VisualSVN hosted repositories?
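
    For what it's worth, the usual shape of the Apache side is a Basic-auth block over the WebSVN location that points at the same user file VisualSVN maintains. A hedged sketch only; the Windows path is a placeholder, so check where your VisualSVN install actually keeps its htpasswd and authz files:

      <Location /websvn>
          AuthType Basic
          AuthName "Subversion Repositories"
          # placeholder path - VisualSVN keeps an htpasswd-style user file alongside its repositories
          AuthUserFile "C:/Repositories/htpasswd"
          Require valid-user
      </Location>

    If I remember the WebSVN config correctly, it also ships a useAuthenticationFile() hook in its config.php that can point at the same authz rules for per-repository access, though I have not verified that against VisualSVN's layout.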

    Read the article

  • Is it possible to have a SeekBar's thumb image extend outside the bar?

    - by Poko
    I have set negative paddings on my custom seekbar so that the round thumb image can go outside the bar, but the thumb isn't rendered out there. Is there any way to force the thumb to be drawn outside those bounds? Sorry guys, I'm new to Android development, and have been tasked with fixing an existing application. The problem is that we have a custom rounded-looking track bar, which consists of two rounded 'end cap' images and a 1 px background that is tiled to create the seekbar. As far as I can tell there was never one image that could be set as the background of a normal SeekBar, which is why a custom one was created. The thumb is a circle and needs to 'fit' into the end caps - the three pieces of the bar are in a relative layout. Right now I'm kind of unclear as to how the 1 px background png gets stretched as the seekbar bg, otherwise I would try to tack the two end caps onto that drawable somehow ... ? Please let me know if this was unclear and I'll try to post any follow-up info. Thanks in advance for any advice!! Oh, I'm using Android 2.1 if that's relevant to anyone's interests :)
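
    One thing worth trying before fighting the padding, hedged since I can't see the layout: SeekBar already exposes a thumb offset precisely so the thumb can hang past the ends of the track, and if the clipping comes from the enclosing RelativeLayout rather than the SeekBar itself, android:clipChildren="false" on that parent is the other usual suspect. Roughly:

      // let the thumb extend about half its width beyond the track ends
      // (thumbDrawable is whatever Drawable you use for the thumb)
      seekBar.setThumbOffset(thumbDrawable.getIntrinsicWidth() / 2);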

    Read the article

  • shell scripting: search/replace & check file exist

    - by johndashen
    I have a perl script (or any executable) E which will take a file foo.xml and write a file foo.txt. I use a Beowulf cluster to run E for a large number of XML files, but I'd like to write a simple job server script in shell (bash) which doesn't overwrite existing txt files. I'm currently doing something like #!/bin/sh PATTERN="[A-Z]*0[1-2][a-j]"; # this matches foo in all cases todo=`ls *.xml | grep $PATTERN -o`; isdone=`ls *.txt | grep $PATTERN -o`; whatsleft=todo - isdone; # what's the unix magic? #tack on the .xml prefix with sed or something #and then call the job server; jobserve E "$whatsleft"; and then I don't know how to get the difference between $todo and $isdone. I'd prefer using sort/uniq to something like a for loop with grep inside, but I'm not sure how to do it (pipes? temporary files?) As a bonus question, is there a way to do lookahead search in bash grep? To clarify: so the simplest way to do what i'm asking is (in pseudocode) for i in `/bin/ls *.xml` do replace xml suffix with txt if [that file exists] add to whatsleft list end done
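
    A rough sketch of the set difference with sort/comm (process substitution is bash-only, so the shebang would need to be bash), plus the simpler per-file existence test from the pseudocode:

      # names in $todo that are not in $isdone
      whatsleft=$(comm -23 <(printf '%s\n' $todo | sort) <(printf '%s\n' $isdone | sort))

      # or skip the list bookkeeping entirely and test each file
      for f in *.xml; do
          [ -e "${f%.xml}.txt" ] || jobserve E "$f"
      done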

    Read the article

  • Self-describing file format for gigapixel images?

    - by Adam Goode
    In medical imaging, there appear to be two ways of storing huge gigapixel images: (1) use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where, then tack on some metadata in some other format; or (2) use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed, then abuse various TIFF tags to store metadata in non-standard ways, and store tiles with overlapping boundaries that must be individually translated later. In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata. Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata? The main issues are: storing images in a way that allows for rapid random access (tiling); storing downsampled images for rapid zooming (pyramid); handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image); storing important metadata, including associated images like a slide's label and thumbnail; and support for lossy storage. What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.

    Read the article

  • t-sql most efficient row to column? for xml path, pivot

    - by ajberry
    create table _orders ( OrderId int identity(1,1) primary key nonclustered ,CustomerId int ) create table _details ( DetailId int identity(1,1) primary key nonclustered ,OrderId int ,ProductId int ) insert into _orders (CustomerId) select 1 union select 2 union select 3 insert into _details (OrderId,ProductId) select 1,100 union select 1,158 union select 1,234 union select 2,125 union select 3,105 union select 3,101 union select 3,212 union select 3,250 -- select orderid ,REPLACE(( SELECT ' ' + CAST(ProductId as varchar) FROM _details d WHERE d.OrderId = o.OrderId ORDER BY d.OrderId,d.DetailId FOR XML PATH('') ),'&#x20;','') as Products from _orders o I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not actual schema above, but concept is similar) in both fixed width and delimited formats. The above FOR XML PATH query gives me the result I want, but when dealing with anything other than small amounts of data, can take a while. I've looked at pivot but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. For example, for an order it would need to output: OrderId,Product1,Product2,Product3,etc Thoughts or suggestions? I am using SQL Server 2k5.
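
    Before abandoning FOR XML PATH, one hedged suggestion: the correlated subquery walks _details once per order, so a covering index on the join key usually buys more than switching to PIVOT (the index name and shape below are illustrative):

      CREATE NONCLUSTERED INDEX IX_details_OrderId
          ON _details (OrderId, DetailId)
          INCLUDE (ProductId);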

    Read the article

  • t-sql most efficient row to column? crosstab for xml path, pivot

    - by ajberry
    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not actual schema below, but concept is similar) in both fixed width and delimited formats. The below FOR XML PATH query gives me the result I want, but when dealing with anything other than small amounts of data, can take awhile. select orderid ,REPLACE(( SELECT ' ' + CAST(ProductId as varchar) FROM _details d WHERE d.OrderId = o.OrderId ORDER BY d.OrderId,d.DetailId FOR XML PATH('') ),'&#x20;','') as Products from _orders o I've looked at pivot but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. I should also point out I don't need to deal with the column names either since the output of the child rows will either be a fixed width string or a delimited string. For example, given the following tables: OrderId CustomerId ----------- ----------- 1 1 2 2 3 3 DetailId OrderId ProductId ----------- ----------- ----------- 1 1 100 2 1 158 3 1 234 4 2 125 5 3 101 6 3 105 7 3 212 8 3 250 for an order I need to output: orderid Products ----------- ----------------------- 1 100 158 234 2 125 3 101 105 212 250 or orderid Products ----------- ----------------------- 1 100|158|234 2 125 3 101|105|212|250 Thoughts or suggestions? I am using SQL Server 2k5. Example Setup: create table _orders ( OrderId int identity(1,1) primary key nonclustered ,CustomerId int ) create table _details ( DetailId int identity(1,1) primary key nonclustered ,OrderId int ,ProductId int ) insert into _orders (CustomerId) select 1 union select 2 union select 3 insert into _details (OrderId,ProductId) select 1,100 union select 1,158 union select 1,234 union select 2,125 union select 3,105 union select 3,101 union select 3,212 union select 3,250 using FOR XML PATH: select orderid ,REPLACE(( SELECT ' ' + CAST(ProductId as varchar) FROM _details d WHERE d.OrderId = o.OrderId ORDER BY d.OrderId,d.DetailId FOR XML PATH('') ),'&#x20;','') as Products from _orders o which outputs what I want, however is very slow for large amounts of data. One of the child tables is over 2 million rows, pushing the processing time out to ~ 4 hours. orderid Products ----------- ----------------------- 1 100 158 234 2 125 3 101 105 212 250

    Read the article

  • BitShifting with BigIntegers in Java

    - by ThePinkPoo
    I am implementing DES Encryption in Java with use of BigIntegers. I am left shifting binary keys with Java BigIntegers by doing the BigInteger.leftShift(int n) method. Key of N (Kn) is dependent on the result of the shift of Kn-1. The problem I am getting is that I am printing out the results after each key is generated and the shifting is not the expected out put. The key is split in 2 Cn and Dn (left and right respectively). I am specifically attempting this: "To do a left shift, move each bit one place to the left, except for the first bit, which is cycled to the end of the block. " It seems to tack on O's on the end depending on the shift. Not sure how to go about correcting this. Results: c0: 11110101010100110011000011110 d0: 11110001111001100110101010100 c1: 111101010101001100110000111100 d1: 111100011110011001101010101000 c2: 11110101010100110011000011110000 d2: 11110001111001100110101010100000 c3: 1111010101010011001100001111000000 d3: 1111000111100110011010101010000000 c4: 111101010101001100110000111100000000 d4: 111100011110011001101010101000000000 c5: 11110101010100110011000011110000000000 d5: 11110001111001100110101010100000000000 c6: 1111010101010011001100001111000000000000 d6: 1111000111100110011010101010000000000000 c7: 111101010101001100110000111100000000000000 d7: 111100011110011001101010101000000000000000 c8: 1111010101010011001100001111000000000000000 d8: 1111000111100110011010101010000000000000000 c9: 111101010101001100110000111100000000000000000 d9: 111100011110011001101010101000000000000000000 c10: 11110101010100110011000011110000000000000000000 d10: 11110001111001100110101010100000000000000000000 c11: 1111010101010011001100001111000000000000000000000 d11: 1111000111100110011010101010000000000000000000000 c12: 111101010101001100110000111100000000000000000000000 d12: 111100011110011001101010101000000000000000000000000 c13: 11110101010100110011000011110000000000000000000000000 d13: 11110001111001100110101010100000000000000000000000000 c14: 1111010101010011001100001111000000000000000000000000000 d14: 1111000111100110011010101010000000000000000000000000000 c15: 11110101010100110011000011110000000000000000000000000000 d15: 11110001111001100110101010100000000000000000000000000000
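
    BigInteger's shiftLeft is a plain shift, so the bit that should wrap around to the end of each 28-bit half just keeps pushing the value wider, which is why the trailing zeros accumulate. The DES key schedule wants a circular shift within 28 bits; a hedged sketch of such a rotate (the method name is made up):

      // circular left shift of a 28-bit half-key by n bits (n is 1 or 2 in the DES schedule)
      static BigInteger rotateLeft28(BigInteger half, int n) {
          BigInteger mask28 = BigInteger.ONE.shiftLeft(28).subtract(BigInteger.ONE);
          return half.shiftLeft(n)
                     .or(half.shiftRight(28 - n))
                     .and(mask28);
      }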

    Read the article

  • Are functional programming languages good for practical tasks?

    - by Clueless
    It seems to me from my experimenting with Haskell, Erlang and Scheme that functional programming languages are a fantastic way to answer scientific questions. For example, taking a small set of data and performing some extensive analysis on it to return a significant answer. It's great for working through some tough Project Euler questions or trying out the Google Code Jam in an original way. At the same time it seems that by their very nature, they are more suited to finding analytical solutions than actually performing practical tasks. I noticed this most strongly in Haskell, where everything is evaluated lazily and your whole program boils down to one giant analytical solution for some given data that you either hard-code into the program or tack on messily through Haskell's limited IO capabilities. Basically, the tasks I would call 'practical', such as "accept a request, find and process requested data, and return it formatted as needed", seem to translate much more directly into procedural languages. The most luck I have had finding a functional language that works like this is Factor, which I would liken to a reverse-polish-notation version of Python. So I am just curious whether I have missed something in these languages or I am just way off the ball in how I ask this question. Does anyone have examples of functional languages that are great at performing practical tasks or practical tasks that are best performed by functional languages?

    Read the article

  • How do I reject if exists? for non-nested attributes?

    - by GoodGets
    Currently my controller lets a user submit muliple "links" at a time. It collects them into an array, creates them for that user, but catches any errors for the User to go back and fix. How can I ignore the creation of any links that already exist for that user? I know that I can use validates_uniqueness_of with a scope for that user, but I'd rather just ignore their creation completely. Here's my controller: @links = params[:links].values.collect{ |link| current_user.links.create(link) }.reject { |p| p.errors.empty? } Each link has a url, so I thought about checking if that link.url already exists for that user, but wasn't really sure how, or where, to do that. Should I tack this onto my controller somehow? Or should it be a new method in the model, like as in a before_validation Callback? (Note: these "links" are not nested, but they do belong_to :user.) So, I'd like to just be able to ignore the creation of these links if possible. Like if a user submits 5 links, but 2 of them already exist for him, then I'd just like for those 2 to be ignored, while the other 3 are created. How should I go about doing this?
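
    One hedged way to do the skip in the controller, assuming each submitted link hash carries a :url key (exists? with a conditions hash is standard ActiveRecord, scoped here to the user's links):

      fresh = params[:links].values.reject { |link| current_user.links.exists?(:url => link[:url]) }
      @links = fresh.collect { |link| current_user.links.create(link) }.reject { |p| p.errors.empty? }

    Doing it in the model instead (e.g. validates_uniqueness_of scoped to the user) would turn the duplicates into validation errors rather than silently ignoring them, so the controller-level filter is closer to the behaviour you describe.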

    Read the article

  • Learning libraries without books or tutorials

    - by Kawili-wili
    While many ask questions about where to find good books or tutorials, I'd like to take the opposite tack. I consider myself to be an entry-level programmer ready to move up to mid-level. I have written code in c, c++, c#, perl, python, clojure, vb, and java, so I'm not completely clueless. Where I see a problem in moving to the next level is learning to make better use of the literally hundreds upon hundreds of libraries available out there. I seem paralyzed unless there is a specific example in a book or tutorial to hand-hold me, yet I often read in various forums where another programmer attempts to assist with a question. He/she will look through the docs or scan the available classes/methods in their favorite IDE and seem to grok what's going on in a relatively short period of time, even if they had no previous experience with that specific library or function. I yearn to break the umbilical chord of constantly spending hour upon hour searching and reading, searching and reading, searching and reading. Many times there is no book or tutorial, or if there is, the discussion glosses over my specific needs or the examples shown are too far off the path for the usage I had in mind or the information is outdated and makes use of deprecated components or the library itself has fallen out of mainstream, yet is still perfectly usable (but no docs, books, or tutorials to hand-hold). My question is: In the absence of books or tutorials, what is the best way to grok new or unfamiliar libraries? I yearn to slicken the grok path so I can get down to the business of doing what I love most -- coding.

    Read the article

  • Haskell Cons Operator (:)

    - by Carson Myers
    I am really new to Haskell (Actually I saw "Real World Haskell" from O'Reilly and thought "hmm, I think I'll learn functional programming" yesterday) and I am wondering: I can use the construct operator to add an item to the beginning of a list: 1 : [2,3] [1,2,3] I tried making an example data type I found in the book and then playing with it: --in a file data BillingInfo = CreditCard Int String String | CashOnDelivery | Invoice Int deriving (Show) --in ghci $ let order_list = [Invoice 2345] $ order_list [Invoice 2345] $ let order_list = CashOnDelivery : order_list $ order_list [CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, CashOnDelivery, ...- etc... it just repeats forever, is this because it uses lazy evaluation? -- EDIT -- okay, so it is being pounded into my head that let order_list = CashOnDelivery:order_list doesn't add CashOnDelivery to the original order_list and then set the result to order_list, but instead is recursive and creates an infinite list, forever adding CashOnDelivery to the beginning of itself. Of course now I remember that Haskell is a functional language and I can't change the value of the original order_list, so what should I do for a simple "tack this on to the end (or beginning, whatever) of this list?" Make a function which takes a list and BillingInfo as arguments, and then return a list? -- EDIT 2 -- well, based on all the answers I'm getting and the lack of being able to pass an object by reference and mutate variables (such as I'm used to)... I think that I have just asked this question prematurely and that I really need to delve further into the functional paradigm before I can expect to really understand the answers to my questions... I guess what i was looking for was how to write a function or something, taking a list and an item, and returning a list under the same name so the function could be called more than once, without changing the name every time (as if it was actually a program which would add actual orders to an order list, and the user wouldn't have to think of a new name for the list each time, but rather append an item to the same list).
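
    Right, that let is a recursive definition rather than an update. The usual shape is exactly the small function you describe, with the result bound to a new name (or threaded through your program's state); a minimal sketch using the BillingInfo type from the post:

      addOrder :: BillingInfo -> [BillingInfo] -> [BillingInfo]
      addOrder item orders = item : orders

      -- in ghci:
      --   let order_list  = [Invoice 2345]
      --   let order_list' = addOrder CashOnDelivery order_list
      --   order_list'     -- [CashOnDelivery,Invoice 2345]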

    Read the article

  • In App Purchase Unique Identifying Data

    - by dageshi
    OK, so I'm writing an iPhone travel guide. You purchase a subscription to a travel guide for 3 months; it downloads a fairly hefty database and for 3 months that database gets updated weekly with new stuff. Now what I'd like to do is make the user enter their email address as a one-off action before they purchase their first guide, for China say. The purpose for doing this is 1) To allow me to contact the user by email when they add a note/tip for a particular place (the app will allow them to send notes & information to me) 2) To uniquely identify who has purchased the subscription so that if they wipe their device and reinstall the app they can plug the email address in and pick up their subscriptions again. Or so they can use the same subscription on another device they own. My concerns are 1) Will Apple allow the email method of restoring functionality to a second or restored device? 2) As long as I tell the user what I'm using their email address for (aka I won't sell it to anyone else and use it for X purposes) will it be OK to ask for said email address? And as a side note, can I tack the device's unique ID onto my server comms to track devices, or is Apple going to throw a hissy fit about that as well?

    Read the article

  • Create a new webpage using a WCF service

    - by sweeney
    Hello, I'd like to create WCF operation contract which consumes a string, produces a public webpage and then returns the address of this new page. [ServiceContract(Namespace = "")] public class DatabaseService { [OperationContract] public string BuildPage(string fileName, string html) { //writes html to file //returns public file url } } This doesn't seem like it should be complicated but i cant figure out how to do it. So far what i've tried is this: [OperationContract] public string PrintToFile(string name, string text) { FileInfo f = new FileInfo(name); StreamWriter w = f.AppendText(); w.Write(text); w.Close(); return f.Directory.ToString(); } Here's the problem. This does not create a file in the web root, it creates it in the directory where the webdav server is running. When i run this on an IIS server it seems to do nothing at all (at least not that i can tell). How can I get a handle to the webroot programmatically so that i can place the resultant file there and then return the public URL? I can tack on the domain name after the fact without issue so if it only returns the relative path to the file that's fine. Thanks, brian
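
    A hedged sketch of the web-root half, assuming the service is hosted under IIS/ASP.NET so the hosting environment can resolve the application path (the file name handling is illustrative, and you would still want to sanitise it):

      string webRoot = System.Web.Hosting.HostingEnvironment.MapPath("~/");
      string physicalPath = System.IO.Path.Combine(webRoot, fileName);
      System.IO.File.WriteAllText(physicalPath, html);
      // hand back a site-relative URL; prepend the domain on the caller's side if needed
      return System.Web.VirtualPathUtility.ToAbsolute("~/" + fileName);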

    Read the article

  • What rules govern the copying of variables in Javascript closures?

    - by int3
    I'd just like to check my understanding of variable copying in Javascript. From what I gather, variables are passed/assigned by reference unless you explicitly tell them to create a copy with the new operator. But I'm a little uncertain when it comes to using closures. Say I have the following code: var myArray = [1, 5, 10, 15, 20]; var fnlist = []; for (var i in myArray) { var data = myArray[i]; fnlist.push(function() { var x = data; console.log(x); }); } fnlist[2](); // returns 20 I gather that this is because fnlist[2] only looks up the value of data at the point where it is invoked. So I tried an alternative tack: var myArray = [1, 5, 10, 15, 20]; var fnlist = []; for (var i in myArray) { var data = myArray[i]; fnlist.push(function() { var x = data; return function() { console.log(x); } }()); } fnlist[2](); // returns 10 So now it returns the 'correct' value. Am I right to say that it works because a function resolves all variable references to their 'constant' values when it is invoked? Or is there a better way to explain it? Any explanations / links to explanations regarding this referencing / copying business would be appreciated as well. Thanks!

    Read the article

  • Emailing a fixed document through Outlook

    - by MoominTroll
    I've added functionality to an application that prints out a bunch of information to a FixedDocument and sends this off to the printer. This works just fine; however, the request is that there be an in-application function that emails the document using Outlook, and it's here that I come unstuck. I'd very much like to just reuse the class that makes the fixed document for printing to generate the text for email, but I'm struggling to do this. I've tried the following... Microsoft.Office.Interop.Outlook.Application oApp = new Microsoft.Office.Interop.Outlook.Application(); MailItem email = (MailItem)(oApp.CreateItem(OlItemType.olMailItem)); email.Recipients.Add("[email protected]"); email.Subject = "Hello"; email.Body = "TEST"; FixedDocument doc = CreateReport(); //make my fixed document //this doesn't work, and the parameters it takes suggest it never will email.Attachments.Add(doc, OlAttachmentType.olByValue, 1, null); email.Send(); I can't help but think I'm on completely the wrong tack here, but I don't really want to have to write a bunch of new text formatting (since email.Body only takes a string) when I've already got the content formatted how I want it. Note that the content is all textual, so I don't really care if it gets sent as an attachment or as text in the email's body. Ideally, if it's sent as an attachment it won't be saved anywhere permanently. Any pointers?
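
    Attachments.Add expects a file path (or another Outlook item) rather than a live FixedDocument, so one hedged route is to write the document out as a temporary .xps file and attach that; the XPS types are the standard WPF ones, the temp-file handling is illustrative:

      string xpsPath = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "report.xps");
      using (var xpsDoc = new System.Windows.Xps.Packaging.XpsDocument(xpsPath, System.IO.FileAccess.Write))
      {
          var writer = System.Windows.Xps.Packaging.XpsDocument.CreateXpsDocumentWriter(xpsDoc);
          writer.Write(doc);   // doc is the FixedDocument from CreateReport()
      }
      email.Attachments.Add(xpsPath, OlAttachmentType.olByValue, 1, "Report");

    The temp file can be deleted once Send() returns, which keeps the "nothing saved permanently" requirement more or less intact.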

    Read the article

  • Dynamic Grouping and Columns

    - by Tim Dexter
    Some good collaboration between myself and Kan Nishida (Oracle BIP Consulting) over at bipconsulting on a question that came in yesterday to an internal mailing list. Is there a way to allow columns to be place into a template dynamically? This would be similar to the Answers Column selector. A customer has said Crystal can do this and I am trying to see how BI Pub can do the same. Example: Report has Regions as a dimension in a table, they want the user to select a parameter that will insert either Units or Dollars without having to create multiple templates. Now whether Crystal can actually do it or not is another question, can Publisher? Yes we can! Kan took the first stab. His approach, was to allow to swap out columns in a table in the report. Some quick steps: 1. Create a parameter from BIP server UI 2. Declare the parameter in RTF template You can check this post to see how you can declare the parameter from the server. http://bipconsulting.blogspot.com/2010/02/how-to-pass-user-input-values-to-report.html 3. Use the parameter value to condition if a particular column needs to be displayed or not. You can use <?if@column:.....?> syntax for Column level IF condition. The if@column is covered in user documentation. This would allow a developer to create a report with the parameter or multiple parameters to allow the user to pick a column to be included in the report. I took a slightly different tack, with the mention of the column selector in the Answers report I took that to mean that the user wanted to select more of a dimensional column and then have the report recalculate all its totals and subtotals based on that selected column. This is a little bit more involved and involves some smart XSL and XPATH expressions, but still very doable. The user can select a column as a parameter, that is passed to the template rather than the query. The parameter value that is actually passed is the element name that you want to regroup the data by. Inside the template we then reference that parameter value in our for-each-group loop. That's where we need the trixy XSL/XPATH code to get the regrouping to happen. At this juncture, I need to hat tip to Klaus, for his article on dynamic sorting that he wrote back in 2006. I basically took his sorting code and applied it to the for-each loop. You can follow both of Kan's first two steps above i.e. Create a parameter from BIP server UI - this just needs to be based on a 'list' type list of value with name/value pairs e.g. Department/DEPARTMENT_NAME, Job/JOB_TITLE, etc. The user picks the 'friendly' value and the server passes the element name to the template. Declare the parameter in RTF template - been here before lots of times right? <?param@begin:group1;'"DEPARTMENT_NAME"'?> I have used a default value so that I can test the funtionality inside the template builder (notice the single and double quotes.) Next step is to use the template builder to build a re-grouped report layout. It does not matter if its hard coded right now; we will add in the dynamic piece next. Once you have a functioning template that is re-grouping correctly. Open up the for-each-group field and modify it to use the parameter: <?for-each-group:ROW;./*[name(.) = $group1]?> 'group1' is my grouping parameter, declared above. We need the XPATH expression to find the column in the XML structure we want to group that matches the one passed by the parameter. Its essentially looking through the data tree for a match. 
    We can show the actual grouping value in the report output with a similar XPATH expression: <?./*[name(.) = $group1]?> In my example, I took things a little further so that I could have a dynamic label for the parameter value. For instance, if I am using MANAGER as the parameter I want to show: Manager: Tim Dexter. My XML elements are readable, e.g. DEPARTMENT_NAME. It's a simple case of replacing the underscore with a space and then 'initcapping' the result: <?xdoxslt:init_cap(translate($group1,'_',' '))?> With this in place, the user can now select a grouping column in the BIP report viewer and the layout will re-group the data and any calculations based on that column. I built a group-above report but you could equally build the group-left version to truly mimic the Answers column selector. If you are interested you can get an example report, sample data and layout template here. Of course, you can combine Klaus' dynamic sorting, Kan's conditional column approach and this dynamic grouping to build a real kick-ass report for users that will keep them happy for hours.

    Read the article

  • Introducing sp_ssiscatalog (v1.0.0.0)

    - by jamiet
    Regular readers of my blog may know that over the last year I have made available a suite of SQL Server Reporting Services (SSRS) reports that provide visualisations of the data in the SQL Server Integration Services (SSIS) 2012 Catalog. Those reports are available at http://ssisreportingpack.codeplex.com. As I have built these reports and used them myself on a real life project a couple of things have dawned on me: As soon as your SSIS Catalog gets a significant amount of data in it the performance of the reports degrades rapidly. This is hampered by the fact that there are limitations as to the SQL statements that I can embed within a SSRS report. SSIS professionals are data guys at heart and those types of people feel more comfortable in a query environment rather than having to go through the rigmarole of standing up a reporting server (well, I know I do anyway) Hence I have decided to take a different tack with the reporting pack. Taking my lead from Adam Machanic’s sp_whoisactive and Brent Ozar’s sp_blitz I have produced sp_ssiscatalog, a stored procedure that makes it easy to get at the crucial data in the SSIS Catalog. I will spend the rest of this blog explaining exactly what sp_ssiscatalog does and how to use it but if you would rather just download the bits yourself and start to play you can download v1.0.0.0 from DB v1.0.0.0. Usage Scenarios Most Recent Execution I find that the most frequent information that one needs to get from the SSIS Catalog is information pertaining to the most recent execution. Hence if you execute sp_ssiscatalog with no parameters, that is exactly what you will get. EXEC [dbo].[sp_ssiscatalog] This will return up to 5 resultsets: EXECUTION - Summary information about the execution including status, start time & end time EVENTS - All events that occurred during the execution OnError,OnTaskFailed - All events where event_name is either OnError or OnTaskFailed OnWarning - All events where event_name is OnWarning EXECUTABLE_STATS - Duration and execution result of every executable in the execution All 5 resultsets will be displayed if there is any data satisfying that resultset. In other words, if there are no (for example) OnWarning events then the OnWarning resultset will not be displayed. The display of these 5 resultsets can be toggled respectively by these 5 optional parameters (all of which are of type BIT): @exec_execution @exec_events @exec_errors @exec_warnings @exec_executable_stats Any Execution As just explained the default behaviour is to supply data for the most recent execution. If you wish to specify which execution the data should return data for simply supply the execution_id as a parameter: EXEC [dbo].[sp_ssiscatalog] 6 All Executions sp_ssiscatalog can also return information about all executions: EXEC [dbo].[sp_ssiscatalog] @operation_type='execs' The most recent execution will appear at the top. 
sp_ssiscatalog provides a number of parameters that enable you to filter the resultset: @execs_folder_name @execs_project_name @execs_package_name @execs_executed_as_name @execs_status_desc Some typical usages might be: //Return all failed executions EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_status_desc='failed' //Return all executions for a specified folder EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_folder_name='My folder' //Return all executions of a specified package in a specified project EXEC [dbo].[sp_ssiscatalog] @operation_type='execs',@execs_project_name='My project', @execs_package_name='Pkg.dtsx' Installing sp_ssicatalog Under the covers sp_ssiscatalog actually calls many other stored procedures and functions hence creating it on your server is not simply a case of running a CREATE PROCEDURE script. I maintain the code in an SQL Server Data Tools (SSDT) database project which means that you have two ways of obtaining it. Download the source code You can download the latest (at the time of writing) source code from http://ssisreportingpack.codeplex.com/SourceControl/changeset/view/70192. Hit the download button to download all the source code in a zip file. The contents of that zip file will include an SSDT database project which you can open up in SSDT and publish just like any other SSDT database project. You can publish to a new database or any existing database, even [SSISDB] if you prefer. Download a dacpac Maintaining the code in an SSDT database project means that it can all get packaged up into a dacpac that you can then publish to your SQL Server. That dacpac is available from DB v1.0.0.0: Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard however in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. Hence, we can use the command-line tool, sqlpackage.exe, instead. Don’t worry if reverting to the command-line sounds a little daunting, I assure you it is not. Simply open a Visual Studio command-prompt and cd to the folder containing the downloaded dacpac: Type: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /action:Publish /TargetDatabaseName:SsisReportingPack /SourceFile:SSISReportingPack.dacpac /Variables:SSISDB=SSISDB /TargetServerName:(local) or the shortened form: "%PROGRAMFILES(x86)%\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe" /a:Publish /tdn:SsisReportingPack /sf:SSISReportingPack.dacpac /v:SSISDB=SSISDB /tsn:(local) remembering to set your server name appropriately (here mine is set to “(local)” ). If everything works successfully you will see this: And you’re done! You’ll have a new database called [SsisReportingPack] which contains sp_ssiscatalog:   Good luck with sp_ssiscatalog. I have been using it extensively on my own projects recently and it has proved to be very useful indeed. Rest-assured however, I will be adding many new capabilities in the future. Feedback is welcome. @Jamiet

    Read the article

  • Wildcards!

    - by Tim Dexter
    Yes, it's been a while, I'm sorry, mumble, mumble ... no excuses. Well, other than it's been, as my son would say, 'hecka busy.' On a brighter note I see Kan has been posting some cool stuff in my absence, long may he continue! I received a question today asking about using a wildcard in a template, something like: <?if:INVOICE = 'MLP*'?> where * is the wildcard. Well, that particular try does not work, but you can do it without building your own wildcard function. XSL, the underpinning language of the RTF templates, has some useful string functions - you can find them listed here. I used the starts-with function to achieve a simple wildcard scenario but the contains can be used in conjunction with some of the others to build something more sophisticated. Assume I have a list of friends and the amounts of money they owe me ... I'm very generous and my interest rates are pretty competitive :0) <ROWSET> <ROW> <NAME>Andy</NAME> <AMT>100</AMT> </ROW> <ROW> <NAME>Andrew</NAME> <AMT>60</AMT> </ROW> <ROW> <NAME>Aaron</NAME> <AMT>50</AMT> </ROW> <ROW> <NAME>Alice</NAME> <AMT>40</AMT> </ROW> <ROW> <NAME>Bob</NAME> <AMT>10</AMT> </ROW> <ROW> <NAME>Bill</NAME> <AMT>100</AMT> </ROW> </ROWSET> Now, listing my friends is easy enough: <?for-each:ROW?> <?NAME?> <?AMT?> <?end for-each?> but let's say I just want to see all my friends beginning with 'A'. To do that I can use an XPATH expression to filter the data and tack it on to the for-each expression. <?for-each:ROW[starts-with(NAME,'A')]?> will find me all the A's. The square braces denote the start of the XPATH expression. starts-with is the function I'm calling and I'm passing the value I want to check, i.e. NAME, and the string I'm looking for. Just substitute in the characters you are looking for. You can of course use the function in an if statement too. <?if:starts-with(NAME,'A')?><?attribute@incontext:color;'red'?><?end if?> Notice I removed the square braces; this will highlight text red if the name begins with an 'A'. You can even use the function to do conditional calculations: <?sum (AMT[starts-with(../NAME,'A')])?> Sum only the amounts where the name begins with an 'A'. Notice the square braces are back; it's a function we want to apply to the AMT field. Also notice that we need to use ../NAME. The AMT and NAME elements are at the same level in the tree, so when we are at the AMT level we need the ../ to go up a level to then come back down to test the NAME value. I have built out the above functions in a sample template here. Huge prizes for the first person to come up with a 'true' wildcard solution i.e. if NAME like '*im*exter* demand cash now!

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that. Planning for HA in IaaS IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VM's) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VM's in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geo's. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a state-ful application to VM's, you may not be able to easily implement an HA solution. Planning for HA in PaaS Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless applications deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site, the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service top route the requests as a type of auto balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. 
Storage is inexpensive; and that second copy can be used for not only HA but DR. Planning for HA in SaaS In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) You have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have a massive HA built in with automatic load balancing (which is often the case).   All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability  (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • Adding a generic image field onto a ModelForm in django

    - by Prairiedogg
    I have two models, Room and Image. Image is a generic model that can tack onto any other model. I want to give users a form to upload an image when they post information about a room. I've written code that works, but I'm afraid I've done it the hard way, and specifically in a way that violates DRY. Was hoping someone who's a little more familiar with django forms could point out where I've gone wrong. Update: I've tried to clarify why I chose this design in comments to the current answers. To summarize: I didn't simply put an ImageField on the Room model because I wanted more than one image associated with the Room model. I chose a generic Image model because I wanted to add images to several different models. The alternatives I considered were were multiple foreign keys on a single Image class, which seemed messy, or multiple Image classes, which I thought would clutter my schema. I didn't make this clear in my first post, so sorry about that. Seeing as none of the answers so far has addressed how to make this a little more DRY I did come up with my own solution which was to add the upload path as a class attribute on the image model and reference that every time it's needed. # Models class Image(models.Model): content_type = models.ForeignKey(ContentType) object_id = models.PositiveIntegerField() content_object = generic.GenericForeignKey('content_type', 'object_id') image = models.ImageField(_('Image'), height_field='', width_field='', upload_to='uploads/images', max_length=200) class Room(models.Model): name = models.CharField(max_length=50) image_set = generic.GenericRelation('Image') # The form class AddRoomForm(forms.ModelForm): image_1 = forms.ImageField() class Meta: model = Room # The view def handle_uploaded_file(f): # DRY violation, I've already specified the upload path in the image model upload_suffix = join('uploads/images', f.name) upload_path = join(settings.MEDIA_ROOT, upload_suffix) destination = open(upload_path, 'wb+') for chunk in f.chunks(): destination.write(chunk) destination.close() return upload_suffix def add_room(request, apartment_id, form_class=AddRoomForm, template='apartments/add_room.html'): apartment = Apartment.objects.get(id=apartment_id) if request.method == 'POST': form = form_class(request.POST, request.FILES) if form.is_valid(): room = form.save() image_1 = form.cleaned_data['image_1'] # Instead of writing a special function to handle the image, # shouldn't I just be able to pass it straight into Image.objects.create # ...but it doesn't seem to work for some reason, wrong syntax perhaps? upload_path = handle_uploaded_file(image_1) image = Image.objects.create(content_object=room, image=upload_path) return HttpResponseRedirect(room.get_absolute_url()) else: form = form_class() context = {'form': form, } return direct_to_template(request, template, extra_context=context)
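
    A hedged way to remove the duplicated path: hand the uploaded file straight to the ImageField's storage with FieldFile.save, which honours the upload_to declared on the Image model, so handle_uploaded_file goes away entirely (a sketch only; the surrounding view stays as it is):

      if form.is_valid():
          room = form.save()
          image_1 = form.cleaned_data['image_1']
          image = Image(content_object=room)
          image.image.save(image_1.name, image_1, save=True)  # stored under upload_to='uploads/images'
          return HttpResponseRedirect(room.get_absolute_url())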

    Read the article

  • How can I load a file into a DataBag from within a Yahoo PigLatin UDF?

    - by Cervo
    I have a Pig program where I am trying to compute the minimum center between two bags. In order for it to work, I found I need to COGROUP the bags into a single dataset. The entire operation takes a long time. I want to either open one of the bags from disk within the UDF, or to be able to pass another relation into the UDF without needing to COGROUP...... Code: # **** Load files for iteration **** register myudfs.jar; wordcounts = LOAD 'input/wordcounts.txt' USING PigStorage('\t') AS (PatentNumber:chararray, word:chararray, frequency:double); centerassignments = load 'input/centerassignments/part-*' USING PigStorage('\t') AS (PatentNumber: chararray, oldCenter: chararray, newCenter: chararray); kcenters = LOAD 'input/kcenters/part-*' USING PigStorage('\t') AS (CenterID:chararray, word:chararray, frequency:double); kcentersa1 = CROSS centerassignments, kcenters; kcentersa = FOREACH kcentersa1 GENERATE centerassignments::PatentNumber as PatentNumber, kcenters::CenterID as CenterID, kcenters::word as word, kcenters::frequency as frequency; #***** Assign to nearest k-mean ******* assignpre1 = COGROUP wordcounts by PatentNumber, kcentersa by PatentNumber; assignwork2 = FOREACH assignpre1 GENERATE group as PatentNumber, myudfs.kmeans(wordcounts, kcentersa) as CenterID; basically my issue is that for each patent I need to pass the sub relations (wordcounts, kcenters). In order to do this, I do a cross and then a COGROUP by PatentNumber in order to get the set PatentNumber, {wordcounts}, {kcenters}. If I could figure a way to pass a relation or open up the centers from within the UDF, then I could just GROUP wordcounts by PatentNumber and run myudfs.kmeans(wordcount) which is hopefully much faster without the CROSS/COGROUP. This is an expensive operation. Currently this takes about 20 minutes and appears to tack the CPU/RAM. I was thinking it might be more efficient without the CROSS. I'm not sure it will be faster, so I'd like to experiment. Anyway it looks like calling the Loading functions from within Pig needs a PigContext object which I don't get from an evalfunc. And to use the hadoop file system, I need some initial objects as well, which I don't see how to get. So my question is how can I open a file from the hadoop file system from within a PIG UDF? I also run the UDF via main for debugging. So I need to load from the normal filesystem when in debug mode. Another better idea would be if there was a way to pass a relation into a UDF without needing to CROSS/COGROUP. This would be ideal, particularly if the relation resides in memory.. ie being able to do myudfs.kmeans(wordcounts, kcenters) without needing the CROSS/COGROUP with kcenters... But the basic idea is to trade IO for RAM/CPU cycles. Anyway any help will be much appreciated, the PIG UDFs aren't super well documented beyond the most simple ones, even in the UDF manual.
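
    For the "open the centers file from inside the UDF" half, a hedged sketch using the plain Hadoop FileSystem API (the path and the surrounding EvalFunc are placeholders; with a default local configuration in debug mode the same call resolves against the local file system):

      // inside exec(), lazily on the first call, caching the parsed centers in a member field
      org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
      org.apache.hadoop.fs.FileSystem fs = org.apache.hadoop.fs.FileSystem.get(conf);
      java.io.BufferedReader reader = new java.io.BufferedReader(new java.io.InputStreamReader(
              fs.open(new org.apache.hadoop.fs.Path("input/kcenters/part-00000"))));
      for (String line = reader.readLine(); line != null; line = reader.readLine()) {
          // parse CenterID \t word \t frequency and stash it
      }
      reader.close();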

    Read the article

< Previous Page | 1 2 3 4 5 6  | Next Page >