Search Results

Search found 4458 results on 179 pages for 'individual improvement'.

Page 138/179 | < Previous Page | 134 135 136 137 138 139 140 141 142 143 144 145  | Next Page >

  • How can I use a delimiter in wmic output, separating columns?

    - by Abhishek Simon
    I want to fetch a Windows hotfix listing in a format whose output can be separated with some delimiter. So far I have found a wmic command that gives me the desired output, but the problem is that the \s delimiter is not going to work here. Is there a way I can place a , or any other character in the output, which I can later use in a Java program to get the individual columns?

    Command:

        wmic qfe get caption,csname,description,hotfixid,installedby,installedon

    Output:

        Caption                                        CSName    Description      HotFixID   InstalledBy          InstalledOn
        http://go.microsoft.com/fwlink/?LinkId=161784  Abhishek  Update           KB971033   NT AUTHORITY\SYSTEM  3/15/2012
        http://support.microsoft.com/?kbid=2032276     Abhishek  Security Update  KB2032276  NT AUTHORITY\SYSTEM  3/15/2012
        ...

    Update: I am trying

        for /f "tokens=1,2,3,4,5,6,7,8,9,10,11" %g in ('wmic qfe get caption,csname,description,fixcomments,hotfixid,installdate,installedby,installedon,name,servicepackineffect,status') do @echo %g,%h,%i,%j,%k,%l,%m,%n,%o,%p

    but it gives me an invalid GET expression:

        C:\Users\Abhishek\Desktop>for /f "tokens=1,2,3,4,5,6,7,8,9,10,11" %g in ('wmic qfe get caption,csname,description,fixcomments,hotfixid,installdate,installedby,installedon,name,servicepackineffect,status') do @echo %g,%h,%i,%j,%k,%l,%m,%n,%o,%p
        Invalid GET Expression.

    What is the problem here? This might solve the problem for me.

    More update: I even tried the command below, but this too does not solve the space problem.

    Command:

        for /f "tokens=1,2,3,4,5,6,7,8,9,10,11" %g in ('wmic qfe list') do @echo %g,%h,%i,%j,%k,%l,%m,%n,%o,%p

    Output:

        Caption,CSName,Description,FixComments,HotFixID,InstallDate,InstalledBy,InstalledOn,Name,ServicePackInEffect
        http://go.microsoft.com/fwlink/?LinkId=161784,Abhishek,Update,KB971033,NT,AUTHOR,,Y\SYSTEM,3/15/2012,
        http://support.microsoft.com/?kbid=2281679,Abhishek,Security,Update,KB2281679,NT,AUTHORITY\SYSTEM,3/15/2012,
        http://support.microsoft.com/?kbid=2284742,Abhishek,Update,KB2284742,NT,AUTHORIT,,SYSTEM,3/15/2012,
        http://support.microsoft.com/?kbid=2286198,Abhishek,Security,Update,KB2286198,NT,AUTHORITY\SYSTEM,3/15/2012,
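
    As an alternative illustration (not from the original post): the hotfix data that wmic qfe reads comes from the WMI class Win32_QuickFixEngineering, so a small program can query that class directly and emit rows with an explicit delimiter, avoiding the column-splitting problem entirely. A minimal C# sketch, assuming a reference to System.Management; the comma delimiter and the selected properties are illustrative choices.

        // Sketch: query Win32_QuickFixEngineering (the class behind "wmic qfe")
        // and print comma-delimited rows.
        using System;
        using System.Management;

        class HotfixList
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    "SELECT Caption, CSName, Description, HotFixID, InstalledBy, InstalledOn " +
                    "FROM Win32_QuickFixEngineering");

                foreach (ManagementBaseObject fix in searcher.Get())
                {
                    // Join the fields with an explicit delimiter so a downstream
                    // parser (e.g. a Java program) can split the columns safely.
                    Console.WriteLine(string.Join(",",
                        fix["Caption"], fix["CSName"], fix["Description"],
                        fix["HotFixID"], fix["InstalledBy"], fix["InstalledOn"]));
                }
            }
        }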

    Read the article

  • iPhone Developer Program registration for UK trading partnership

    - by CMLloyd
    I have been looking into this for a long time and have found no definitive answer. I can't be the only person to have faced this problem, and I am wondering how you proceeded in similar cases.

    I'm part of a partnership, based in the UK, trading as, let's say, "ABCD iPhone Apps" (legally, a perfectly legitimate way of doing business). I've now developed an iPhone app and I want our company name ("ABCD iPhone Apps") to appear as the seller in the App Store. That way, any future apps that we develop can all be released under the "ABCD iPhone Apps" aegis too.

    Given that we aren't an incorporated company (and probably never will be), is it possible for us to enroll in the iPhone Developer Program as a company? Or is there another solution? (Note: I do also have an Individual account, but that is for personal projects and is in no way connected to the partnership, and shall remain that way.)

    EDIT: I've just spoken to a guy at ADC UK and he tells me there is no other solution. For a company to be approved on the Developer Program, Apple needs to see a copy of the company's Certificate of Incorporation during the registration process, otherwise no approval.

    Read the article

  • How can I encrypt CoreData contents on an iPhone

    - by James A. Rosen
    I have some information I'd like to store statically encrypted in an iPhone application. I'm new to iPhone development, so I'm not terribly familiar with CoreData and how it integrates with the views. I have the data as JSON, though I can easily put it into a SQLite3 database or any other backing data format. I'll take whatever is easiest (a) to encrypt and (b) to integrate with the iPhone view layer.

    The user will need to enter the password to decrypt the data each time the app is launched. The purpose of the encryption is to keep the data from being accessible if the user loses the phone. For speed reasons, I would prefer to encrypt and decrypt the entire file at once rather than encrypting each individual field in each row of the database.

    Note: this isn't the same idea as Question 929744, in which the purpose is to keep the user from messing with or seeing the data. The data should be perfectly transparent when in use.

    Also note: I'm willing to use SQLCipher to store the data, but would prefer to use things that already exist in the iPhone/CoreData framework rather than go through the lengthy build/integration process it involves.

    Read the article

  • Ant target generates empty suite XML file

    - by user200317
    I am using Ant for my project and I have been trying to generate a JUnit report using an Ant target. The problem I run into is that at the end of the execution my TESTS-TestSuites.xml is empty, but all the other individual test XML files have data. Because of this my HTML reports are empty, in the sense that the results show "0".

    Here is my Ant target:

        <!-- JUnit Reporting -->
        <target name="test-report" depends="build-all" description="Generate Test Results as HTML">
            <taskdef name="junitreport"
                     classname="org.apache.tools.ant.taskdefs.optional.junit.XMLResultAggregator"/>
            <junit printsummary="on" haltonfailure="off" haltonerror="off" fork="yes">
                <batchtest fork="yes" todir="${test.reports}" filtertrace="on">
                    <fileset dir="${build.classes}" includes="**/Test*Selenium.class"/>
                </batchtest>
                <formatter type="plain" usefile="false"/>
                <formatter type="xml" usefile="true"/>
                <classpath>
                    <path refid="classpath"/>
                    <path refid="application"/>
                </classpath>
            </junit>
            <echo message="running JUnit Report" />
            <junitreport todir="${test.reports}">
                <fileset dir="${test.reports}">
                    <include name="Test-*.xml" />
                </fileset>
                <report format="frames" todir="${test.reports.html}" />
            </junitreport>
        </target>

    This is what I get as the Ant print summary:

        [junitreport] Processing C:\YukonSelenium\reports\TESTS-TestSuites.xml to C:\DOCUME~1\user\LOCALS~1\Temp\null1848051184
        [junitreport] Loading stylesheet jar:file:/C:/DevApps/apache-ant-1.7.1/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl
        [junitreport] Transform time: 859ms
        [junitreport] Deleting: C:\DOCUME~1\user\LOCALS~1\Temp\null1848051184

    Here's how the JUnit report looks: http://www.freeimagehosting.net/image.php?43dd69d3b8.jpg

    Thanks in advance,

    Read the article

  • Oracle XMLDB's XMLCAST and XMLQUERY incompatible with iBatis?

    - by tthong
    I've been trying to select a list of values from XMLs stored in an XMLType column, but I keep getting the errors listed at the tail end of this post. The select id is getXMLFragment, and the relevant subset of sqlmap.xml is as follows (CUSTOMER is an XMLType column in CLIENT_INFO):

        <select id="getXMLFragment" resultClass="list">
            SELECT XMLCAST(
                       XMLQUERY('$CUSTOMER/CUSTOMER/DETAILS/CUST_NAME/text()'
                                PASSING CUSTOMER AS "CUSTOMER"
                                RETURNING CONTENT)
                       AS VARCHAR2(20)) AS customers
            FROM SHOP.CLIENT_INFO
        </select>

    I call the statement using

        List<String> custNames = (List<String>) sqlMap.queryForList("getXMLFragment");

    I am using ibatis-2.3.4.726.jar. Is it because iBatis does not recognise XML DB queries and hence tokenizes the string wrongly? On a side note, I have implemented XMLTypeCallback.java to handle XMLType insertions successfully, and I think it will work should I wish to retrieve the entire XML. However, in this case I need to extract only individual values due to requirements. A workaround would be greatly appreciated. Thanks in advance.

    The exceptions generated are listed below:

        --- The error occurred in sqlMap.xml.
        --- The error occurred while preparing the mapped statement for execution.
        --- Check the getXMLFragment.
        --- Check the SQL statement.
        --- Cause: java.util.NoSuchElementException
            at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:204)
            at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryForList(MappedStatement.java:139)
            at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:567)
            at com.ibatis.sqlmap.engine.impl.SqlMapExecutorDelegate.queryForList(SqlMapExecutorDelegate.java:541)
            at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:118)
            at com.ibatis.sqlmap.engine.impl.SqlMapSessionImpl.queryForList(SqlMapSessionImpl.java:122)
            at com.ibatis.sqlmap.engine.impl.SqlMapClientImpl.queryForList(SqlMapClientImpl.java:98)
            at Main.main(Main.java:60)
        Caused by: java.util.NoSuchElementException
            at java.util.StringTokenizer.nextToken(StringTokenizer.java:332)
            at com.ibatis.sqlmap.engine.mapping.sql.simple.SimpleDynamicSql.processDynamicElements(SimpleDynamicSql.java:90)
            at com.ibatis.sqlmap.engine.mapping.sql.simple.SimpleDynamicSql.getSql(SimpleDynamicSql.java:45)
            at com.ibatis.sqlmap.engine.mapping.statement.MappedStatement.executeQueryWithCallback(MappedStatement.java:184)
            ... 7 more

    Read the article

  • Excel VBA Text To Column

    - by Pat
    This is what I currently have:

        H101  John Doe    Jane Doe    Jack Doe
        H102  John Smith  Jane Smith  Katie Smith  Jack Smith

    And here is what I want:

        H101  John Doe
        H101  Jane Doe
        H101  Jack Doe
        H102  John Smith
        H102  Jane Smith
        H102  Katie Smith
        H102  Jack Smith

    Obviously I want to do this on a bigger scale. The number of columns is between 1 and 6, so I can't limit it that way. I was able to get a script that allows me to put each individual on one row. However, I am having a hard time getting the first column to copy over to each row.

        Sub ToOneColumn()
            Dim i As Long, k As Long, j As Integer
            Application.ScreenUpdating = False
            Columns(2).Insert
            i = 0
            k = 1
            While Not IsEmpty(Cells(k, 3))
                j = 3
                While Not IsEmpty(Cells(k, j))
                    i = i + 1
                    Cells(i, 1) = Cells(k, 1) 'CODE IN QUESTION
                    Cells(i, 2) = Cells(k, j)
                    Cells(k, j).Clear
                    j = j + 1
                Wend
                k = k + 1
            Wend
            Application.ScreenUpdating = True
        End Sub

    Like I said, it was working fine to get everyone on their own row, but I can't figure out how to get that first column. It seems like it should be so simple, but it's bugging me. Any help is greatly appreciated.

    Read the article

  • trigger config transformation in TFS 2010 or msbuild

    - by grenade
    I'm attempting to make use of configuration transformations in a continuous integration environment. I need a way to tell the TFS build agent to perform the transformations. I was kind of hoping it would just work after discovering the config transform files (web.qa-release.config, web.production-release.config, etc...). But it doesn't.

    I have a TFS build definition that builds the right configurations (qa-release, production-release, etc...) and I have some specific .proj files that get built within these definitions and those contain some environment specific parameters eg:

        <PropertyGroup Condition=" '$(Configuration)'=='production-release' ">
            <TargetHost Condition=" '$(TargetHost)'=='' ">qa.web</TargetHost>
            ...
        </PropertyGroup>
        <PropertyGroup Condition=" '$(Configuration)'=='qa-release' ">
            <TargetHost Condition=" '$(TargetHost)'=='' ">production.web</TargetHost>
            ...
        </PropertyGroup>

    I know from the output that the correct configurations are being built. Now I just need to learn how to trigger the config transformations. Is there some hocus pocus that I can add to the final .proj in the build to kick off the transform and blow away the individual transform files?

    Read the article

  • How to add and remove nested model fields dynamically using Haml and Formtastic

    - by Brightbyte8
    We've all seen the brilliant complex forms railscast where Ryan Bates explains how to dynamically add or remove nested objects within the parent object form using Javascript. Has anyone got any ideas about how these methods need to be modified to work with Haml and Formtastic?

    To add some context, here's a simplified version of the problem I'm currently facing:

        # Teacher form (which has nested subject forms) [from my application]
        - semantic_form_for(@teacher) do |form|
          - form.inputs do
            = form.input :first_name
            = form.input :surname
            = form.input :city
            = render 'subject_fields', :form => form
            = link_to_add_fields "Add Subject", form, :subjects

        # Individual Subject form partial [from my application]
        - form.fields_for :subjects do |ff|
          #subject_field
            = ff.input :name
            = ff.input :exam
            = ff.input :level
            = ff.hidden_field :_destroy
            = link_to_remove_fields "Remove Subject", ff

        # Application Helper (straight from Railscasts)
        def link_to_remove_fields(name, f)
          f.hidden_field(:_destroy) + link_to_function(name, "remove_fields(this)")
        end

        def link_to_add_fields(name, f, association)
          new_object = f.object.class.reflect_on_association(association).klass.new
          fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder|
            render(association.to_s.singularize + "_fields", :f => builder)
          end
          link_to_function(name, h("add_fields(this, \"#{association}\", \"#{escape_javascript(fields)} \")"))
        end

        # Application.js (straight from Railscasts)
        function remove_fields(link) {
          $(link).previous("input[type=hidden]").value = "1";
          $(link).up(".fields").hide();
        }

        function add_fields(link, association, content) {
          var new_id = new Date().getTime();
          var regexp = new RegExp("new_" + association, "g")
          $(link).up().insert({ before: content.replace(regexp, new_id) });
        }

    The problem with this implementation seems to be the Javascript methods - the DOM tree of a Formtastic form differs greatly from a regular Rails form. I've seen this question asked online a few times but haven't come across an answer yet - now you know that help will be appreciated by more than just me! Jack

    Read the article

  • Could CouchDB benefit significantly from the use of BERT instead of JSON?

    - by Victor Rodrigues
    I appreciate a lot CouchDB's attempt to use universal web formats in everything it does: RESTful HTTP methods in every interaction, JSON objects, JavaScript code to customize the database and documents.

    CouchDB seems to scale pretty well, but the individual cost to make a request usually makes 'relational' people afraid of it. Many small business applications should deal with only one machine and that's all. In this case the scalability talk doesn't say too much; we need more performance per request, or people will not use it.

    BERT (Binary ERlang Term, http://bert-rpc.org/) has proven to be a faster and lighter format than JSON, and it is native to Erlang, the language in which CouchDB is written. Could we benefit from that, using BERT documents instead of JSON ones? I'm not saying just for retrieving in views, but for everything CouchDB does, including syncing. And, as a consequence of it, use Erlang functions instead of JavaScript ones.

    This would modify some original CouchDB principles, because today it is very web oriented. Considering I imagine few people would make their database API public and usually its data is accessed by users through an application, it would be a good deal to have the ability to configure CouchDB to work faster. HTTP+JSON calls could still be handled by CouchDB, considering an extra cost in these cases because of parsing.

    Read the article

  • What are some strategies for maintaining a common database schema with a team of developers and no DBA?

    - by Mahmoud Abdelkader
    I'm curious about how others have approached the problem of maintaining and synchronizing database changes across many (10+) developers without a DBA. What I mean, basically, is: if someone wants to make a change to the database, what are some strategies for doing that? (i.e. I've created a 'Car' model and now I want to apply the appropriate DDL to the database, etc.)

    We're primarily a Python shop and our ORM is SQLAlchemy. Previously, we had written our models in such a way as to create the schema using our ORM, but we recently ditched this because:

    - We couldn't track changes using the ORM
    - The state of the ORM wasn't in sync with the database (e.g. lots of differences, primarily related to indexes and unique constraints)
    - There was no way to audit database changes unless the developer documented the database change via email to the team

    Our solution to this problem was to basically have a "gatekeeper" individual who checks every change into the database and applies all accepted database changes to an accepted_db_changes.sql file, whereby the developers who need to make any database changes put their requests into a proposed_db_changes.sql file. We check this file in, and, when it's updated, we all apply the change to our personal database on our development machine. We don't create indexes or constraints on the models; they are applied explicitly on the database.

    I would like to know what are some strategies to maintain database schemas and whether ours seems reasonable. Thanks!

    Read the article

  • MSTest Test Context Exception Handling

    - by Flip
    Is there a way that I can get to the exception that was handled by the MSTest framework using the TestContext or some other method on a base test class? If an unhandled exception occurs in one of my tests, I'd like to spin through all the items in the exception's Data dictionary and display them in the test result to help me figure out why the test failed (we usually add data to the exception to help us debug in the production environment, so I'd like to do the same for testing).

    Note: I am not testing that an exception was SUPPOSED TO HAPPEN (I have other tests for that), I am testing a valid case; I just need to see the exception data. Here is a code example of what I'm talking about.

        [TestMethod]
        public void IsFinanceDeadlineDateValid()
        {
            var target = new BusinessObject();
            SetupBusinessObject(target);

            // How can I capture this in the test context so I can display all the data
            // in the exception in the test result...
            var expected = 100;
            try
            {
                Assert.AreEqual(expected, target.PerformSomeCalculationThatMayDivideByZero());
            }
            catch (Exception ex)
            {
                ex.Data.Add("SomethingImportant", "I want to see this in the test result, as its important");
                ex.Data.Add("Expected", expected);
                throw ex;
            }
        }

    I understand there are issues around why I probably shouldn't have such an encapsulating method, but we also have sub tests to test all the functionality of PerformSomeCalculation... However, if the test fails and I rerun it, 99% of the time it passes, so I can't debug anything without this information.

    I would also like to do this on a GLOBAL level, so that if any test fails, I get the information in the test results, as opposed to doing it for each individual test. Here is the code that would put the exception info in the test results.

        public void AddDataFromExceptionToResults(Exception ex)
        {
            StringBuilder whereAmI = new StringBuilder();
            var holdException = ex;
            while (holdException != null)
            {
                Console.WriteLine(whereAmI.ToString() + "--" + holdException.Message);
                foreach (var item in holdException.Data.Keys)
                {
                    Console.WriteLine(whereAmI.ToString() + "--Data--" + item + ":" + holdException.Data[item]);
                }
                holdException = holdException.InnerException;
            }
        }
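
    As a hedged sketch (not from the original post) of one way to get this behaviour globally: a shared base test class can wrap each test body, walk the exception chain, write everything in Exception.Data to the console (which MSTest captures into the test output), and rethrow. The class and method names below are illustrative.

        // Sketch: a base-class helper that dumps Exception.Data for the whole
        // exception chain into the test output before rethrowing.
        using System;

        public abstract class ExceptionReportingTestBase
        {
            protected void Run(Action testBody)
            {
                try
                {
                    testBody();
                }
                catch (Exception ex)
                {
                    // Walk the chain of inner exceptions and print each Data entry.
                    for (var current = ex; current != null; current = current.InnerException)
                    {
                        Console.WriteLine("--" + current.Message);
                        foreach (var key in current.Data.Keys)
                        {
                            Console.WriteLine("--Data--" + key + ":" + current.Data[key]);
                        }
                    }
                    throw; // preserve the original failure and its stack trace
                }
            }
        }

    A test class deriving from this base would call Run(() => { ... }) around its body, so every failure reports its Data without a per-test try/catch block.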

    Read the article

  • DDD principles and ASP.NET MVC project design

    - by kaivalya
    A two-part question. I have a Product aggregate that has:

    - Prices
    - PackagingOptions
    - ProductDescriptions
    - ProductImages
    - etc.

    I have modeled one product repository and did not create individual repositories for any of the child classes. All DB operations are handled through the product repository. Am I understanding the DDD concept correctly so far? Sometimes the question comes to my mind that having a repository for, let's say, packaging options could make my life easier by directly fetching a packaging option from the DB by its ID, instead of asking the product repository to find it in its PackagingOptions collection and give it to me.

    The second part is managing the edit/create operations using the ASP.NET MVC framework. I am currently trying to manage all add/edit/remove of these child collections of product through the product controller (sound right?). One challenge I am now facing is: if I edit a specific packaging option of a product through

        mydomain/product/editpackagingoption/10

    I have access to the ID of the packaging option, but I don't have the ID of the product itself, and this forces me to write a query to first find the product that has this specific packaging option and then edit that product and the relevant packaging option. I can do this because all packaging options have their unique ID, but this would fail for collections that don't have unique IDs. That feels very wrong.

    The next option I thought of is sending both the product and packaging option IDs in the URL, like:

        mydomain/product/editpackagingoption/3/10

    But I am not sure if that is a good design either. So I am at a point that I am a bit confused, and might have fundamental misunderstandings around all of this. I would appreciate it if you bear with the long question and help me put this together. Thanks!
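
    For illustration only (none of these names come from the original project), a minimal sketch of the aggregate-root arrangement described above: a single repository for Product, children reached through the root, and a controller action that takes both IDs so no search across products is needed.

        // Sketch of the Product aggregate root with one repository; names are illustrative.
        using System.Collections.Generic;
        using System.Linq;

        public class PackagingOption
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class Product
        {
            public int Id { get; set; }
            public IList<PackagingOption> PackagingOptions { get; } = new List<PackagingOption>();

            public PackagingOption FindPackagingOption(int optionId)
            {
                return PackagingOptions.SingleOrDefault(p => p.Id == optionId);
            }
        }

        public interface IProductRepository
        {
            Product GetById(int productId); // the only way in is through the root
            void Save(Product product);
        }

        // With a route carrying both IDs (mydomain/product/editpackagingoption/3/10),
        // the action can load the aggregate once and pick out the child:
        //
        // public ActionResult EditPackagingOption(int productId, int optionId)
        // {
        //     var product = _products.GetById(productId);
        //     var option = product.FindPackagingOption(optionId);
        //     return View(option);
        // }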

    Read the article

  • Dynamically loading sub-trees into YUI Treeview

    - by user319399
    When you create a YUI TreeView instance, you can pass in an object that represents an entire tree, and it will automatically build up the TextNodes for you. I'd like to send in a partial tree, such that the tree only goes, say, 2 levels deep, and anything deeper than that will invoke dynamic loading. I've got that much working.

    Now for the interesting part. In the dynamic loading callback I give to my tree instance, I want to again be able to just give YUI a big object representing more of the tree. I want to do something like this:

        // data is an array of objects organized into a tree, with some nodes requiring
        // dynamic loading when they are navigated to
        tree = new YAHOO.widget.TreeView("treeDiv1", data);
        tree.setDynamicLoad(loadDataForNode);

        function loadDataForNode(node, onCompleteCallback) {
            if (node.children.length == 0) {
                var subTree = {
                    "label": "Cars",
                    isLeaf: false,
                    children: [
                        { "label": "Chevy", isLeaf: true },
                        { "label": "Ford", isLeaf: true },
                    ]
                };
                // doesn't work, even though it has the required "label" field
                var tempNode = new YAHOO.widget.TextNode(subTree, node, true);
            }
            onCompleteCallback();
        }

    Is this possible? Or do I have to iterate over all the nodes in my subtree and construct individual TextNodes for each one? Thanks much...

    Read the article

  • OpenOffice with .NET: how to iterate through all paragraphs and read text

    - by Darius Kucinskas
    How do I iterate through all paragraphs in an OpenOffice Writer document and output the text? I have Java examples, but don't know how to convert the code to C#. The Java example can be found here: http://wiki.services.openoffice.org/wiki/API/Samples/Java/Writer/TextDocumentStructure

    My C# code:

        InitOpenOfficeEnvironment();
        XMultiServiceFactory multiServiceFactory = connect();
        XComponentLoader componentLoader =
            (XComponentLoader)multiServiceFactory.createInstance("com.sun.star.frame.Desktop");

        // set the property
        PropertyValue[] propertyValue = new PropertyValue[1];
        PropertyValue aProperty = new PropertyValue();
        aProperty.Name = "Hidden";
        aProperty.Value = new uno.Any(false);
        propertyValue[0] = aProperty;

        XComponent xComponent = componentLoader.loadComponentFromURL(
            @"file:///C:/code/test3.doc", "_blank", 0, propertyValue);

        XEnumerationAccess xEnumerationAccess = (XEnumerationAccess)xComponent;
        XEnumeration xParagraphEnumeration = xEnumerationAccess.createEnumeration();

        while (xParagraphEnumeration.hasMoreElements())
        {
            // ???
            // The problem is here: nextElement() returns uno.Any but
            // I somehow should get XTextContent????
            uno.Any textElement = xParagraphEnumeration.nextElement();

            // create another enumeration to get all text portions of the paragraph
            XEnumeration xParaEnumerationAccess = textElement.createEnumeration();

            // step 3: through the text portions enumeration,
            // get an interface to each individual text portion
        }

        xComponent.dispose();
        xComponent = null;
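
    A hedged sketch of the step in question, not verified against the CLI-UNO binding: the assumption is that uno.Any exposes the wrapped UNO object through its Value property, which can then be cast to the same interfaces the Java sample queries (XServiceInfo, XEnumerationAccess, XTextRange) under the usual unoidl namespaces. If that assumption holds, the paragraph loop would look roughly like this:

        // Assumed translation of the Java enumeration pattern; the uno.Any.Value
        // unwrapping and the unoidl namespaces are assumptions, not verified here.
        using System;
        using unoidl.com.sun.star.container;
        using unoidl.com.sun.star.lang;
        using unoidl.com.sun.star.text;

        static class ParagraphDumper
        {
            public static void DumpParagraphs(XEnumeration xParagraphEnumeration)
            {
                while (xParagraphEnumeration.hasMoreElements())
                {
                    object element = xParagraphEnumeration.nextElement().Value; // unwrap uno.Any (assumption)

                    // Tables can appear in this enumeration too, so check for the Paragraph service.
                    var serviceInfo = element as XServiceInfo;
                    if (serviceInfo == null || !serviceInfo.supportsService("com.sun.star.text.Paragraph"))
                        continue;

                    // Each paragraph enumerates its text portions.
                    XEnumeration portions = ((XEnumerationAccess)element).createEnumeration();
                    while (portions.hasMoreElements())
                    {
                        var portion = (XTextRange)portions.nextElement().Value; // unwrap again (assumption)
                        Console.Write(portion.getString());
                    }
                    Console.WriteLine();
                }
            }
        }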

    Read the article

  • Secure hash and salt for PHP passwords

    - by luiscubal
    It is currently said that MD5 is partially unsafe. Taking this into consideration, I'd like to know which mechanism to use for password protection.

    "Is 'double hashing' a password less secure than just hashing it once?" suggests that hashing multiple times may be a good idea, and "How to implement password protection for individual files?" suggests using salt.

    I'm using PHP. I want a safe and fast password encryption system. Hashing a password a million times may be safer, but also slower. How do I achieve a good balance between speed and safety? Also, I'd prefer the result to have a constant number of characters.

    - The hashing mechanism must be available in PHP
    - It must be safe
    - It can use salt (in this case, are all salts equally good? Is there any way to generate good salts?)

    Also, should I store two fields in the database (one using MD5 and another one using SHA, for example)? Would it make it safer or unsafer?

    In case I wasn't clear enough, I want to know which hashing function(s) to use and how to pick a good salt in order to have a safe and fast password protection mechanism.

    EDIT: The website shouldn't contain anything too sensitive, but still I want it to be secure.

    EDIT2: Thank you all for your replies, I'm using hash("sha256", $salt . ":" . $password . ":" . $id)

    Questions that didn't help:

    - What's the difference between SHA and MD5 in PHP
    - Simple Password Encryption
    - Secure methods of storing keys, passwords for asp.net
    - How would you implement salted passwords in Tomcat 5.5
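
    The question is about PHP, so purely as a language-neutral illustration of the salt-plus-deliberately-slow-hash idea being discussed (not an answer to the PHP specifics), here is a sketch using .NET's PBKDF2 implementation. The iteration count and sizes are arbitrary choices, and the stored value keeps a constant length.

        // Illustration of salted, iterated hashing (PBKDF2). Parameters are illustrative.
        using System;
        using System.Security.Cryptography;

        static class PasswordHasher
        {
            const int SaltSize = 16;        // random salt bytes per password
            const int HashSize = 32;        // derived key bytes
            const int Iterations = 100000;  // tune so hashing takes tens of milliseconds

            public static string HashPassword(string password)
            {
                var salt = new byte[SaltSize];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(salt);

                using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
                {
                    byte[] hash = kdf.GetBytes(HashSize);
                    // Store salt and hash together; the result has a constant length.
                    return Convert.ToBase64String(salt) + ":" + Convert.ToBase64String(hash);
                }
            }

            public static bool Verify(string password, string stored)
            {
                var parts = stored.Split(':');
                byte[] salt = Convert.FromBase64String(parts[0]);
                byte[] expected = Convert.FromBase64String(parts[1]);

                using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
                {
                    byte[] actual = kdf.GetBytes(expected.Length);
                    int diff = 0;                      // compare without an early exit
                    for (int i = 0; i < actual.Length; i++)
                        diff |= actual[i] ^ expected[i];
                    return diff == 0;
                }
            }
        }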

    Read the article

  • UX question: is it better to have "serious delete" or to have "trash"?

    - by ftrotter
    I am developing an application that allows a user to manage some individual data points. One of the things that my users will want to do is "delete", but what should that mean? For a web application, is it better to present the user with a "serious delete" or to use a "trash" system?

    Under "serious delete" (would love to know if there is a better name for this...) you click "delete" and then the user is warned: "this is a final and tragic action. Once you do this you will not be able to get -insert data point name here- back, even if you are crying..." Then if they click delete... well, it truly is gone forever.

    Under the "trash" model, you never trust that the user really wants to delete... instead you remove the data point from the "main display" and put it into a bucket called "the trash". This gets it out of the user's way, which is what they usually want, but they can get it back if they make a mistake. Obviously this is the way most operating systems have gone.

    The advantages of "serious delete" are:

    - Easy to implement
    - Easy to explain to users

    The disadvantages of "serious delete" are:

    - It can be tragically final
    - Sometimes, cats walk on keyboards

    The advantages of the "trash" system are:

    - The user is safe from themselves
    - Bulk methods like "delete a bunch at once" make more sense
    - Saves support headaches

    The disadvantages of the "trash" system are:

    - For sensitive data, you create an illusion of destruction: users think something is gone, but it is not
    - Lots of subtle distinctions make implementation more difficult
    - Do you "eventually" delete the contents of the trash?

    My question is: which one is the right design pattern for modern web applications? With enough discussion to justify your answer... Would love to be pointed towards some relevant research. -FT
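
    As a hedged illustration of what the "trash" model tends to mean in data terms (none of the names below come from the question): records get a soft-delete timestamp instead of a hard delete, restore just clears it, and a purge step eventually removes old trash for real.

        // Sketch of a soft-delete ("trash") store; all names are made up for this example.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class DataPoint
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public DateTime? TrashedAt { get; set; } // null = visible, non-null = in trash
        }

        public class DataPointStore
        {
            private readonly List<DataPoint> _items = new List<DataPoint>();

            public void Add(DataPoint d) => _items.Add(d);

            public IEnumerable<DataPoint> Visible() => _items.Where(d => d.TrashedAt == null);
            public IEnumerable<DataPoint> Trash() => _items.Where(d => d.TrashedAt != null);

            public void MoveToTrash(int id) => Find(id).TrashedAt = DateTime.UtcNow;
            public void Restore(int id) => Find(id).TrashedAt = null;

            // The "eventually delete the contents of the trash" question becomes a policy:
            // purge anything trashed longer ago than the retention window.
            public void PurgeOlderThan(TimeSpan retention)
            {
                _items.RemoveAll(d => d.TrashedAt != null &&
                                      DateTime.UtcNow - d.TrashedAt.Value > retention);
            }

            private DataPoint Find(int id) => _items.Single(d => d.Id == id);
        }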

    Read the article

  • C# Regex - Replace multiple characters at once without overwriting?

    - by Everaldo Aguiar
    Hello guys, I'm implementing a C# program that should automate a mono-alphabetic substitution cipher. The functionality I'm working on at the moment is the simplest one: the user provides a plain text and a cipher alphabet, for example:

        Plain text (input):   THIS IS A TEST
        Cipher alphabet:      A - Y, H - Z, I - K, S - L, E - J, T - Q
        Cipher text (output): QZKL KL QJLQ

    I thought of using regular expressions since I've been programming in Perl for a while, but I'm encountering some problems in C#. First I would like to know if someone has a suggestion for a regular expression that would replace all occurrences of each letter with its corresponding cipher letter (provided by the user) at once and without overwriting anything.

    Example: in this case, the user provides the plaintext "TEST", and in his cipher alphabet he wishes to have all his T's replaced with E's, E's replaced with Y and S replaced with J. My first thought was to substitute each occurrence of a letter with an individual character and then replace that character with the cipher letter corresponding to the plaintext letter provided. Using the same example word "TEST", the steps taken by the program to provide an answer would be:

    1 - Replace T's with (let's say) @
    2 - Replace E's with #
    3 - Replace S's with &
    4 - Replace @ with E, # with Y, & with J
    5 - Output = EYJE

    This solution doesn't seem to work for large texts. I would like to know if anyone can think of a single regular expression that would allow me to replace each letter in a given text with its corresponding letter in a 26-letter cipher alphabet, without the need to split the task into an intermediate step as I mentioned.

    If it helps visualize the process, this is a print screen of my GUI for the program: http://img43.imageshack.us/img43/2118/11618743.jpg
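
    One common way to do this in a single pass (offered here as a hedged sketch, not necessarily the approach the asker settled on) is Regex.Replace with a MatchEvaluator: each character of the original input is examined exactly once, so earlier substitutions can never be re-substituted. The alphabet below is the example one from the question.

        // Single-pass substitution with a MatchEvaluator; no intermediate placeholder step.
        using System;
        using System.Collections.Generic;
        using System.Text.RegularExpressions;

        class MonoAlphabeticCipher
        {
            static void Main()
            {
                var key = new Dictionary<char, char>
                {
                    ['A'] = 'Y', ['H'] = 'Z', ['I'] = 'K',
                    ['S'] = 'L', ['E'] = 'J', ['T'] = 'Q'
                };

                string plain = "THIS IS A TEST";

                // The evaluator sees each matched character of the *original* text once,
                // so replacements never overwrite one another.
                string cipher = Regex.Replace(plain, "[A-Z]", m =>
                {
                    char c = m.Value[0];
                    return key.TryGetValue(c, out char mapped) ? mapped.ToString() : m.Value;
                });

                Console.WriteLine(cipher); // letters without a mapping pass through unchanged
            }
        }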

    Read the article

  • What is the right way to scale a Flex application up to fullscreen?

    - by Impirator
    Fullscreen mode and I have been battling for a while in this Flex application, and I'm coming up short on Google results to end my woes. I have no problem going into fullscreen mode by doing Application.application.stage.displayState = StageDisplayState.FULL_SCREEN;, but the rest of the content just sits there in the top left corner at its original size.

    All right, says I, I'll just do a stage.scaleMode = StageScaleMode.SHOW_ALL and make it figure out how to pull this off. And it looks like it does. Except that when you mouse over the individual checkboxes and buttons and various components, they all fidget slightly - just a slight jump up or down as they resize... on mouse over. Well, this is frustrating, but bearable. I can always just invoke invalidateSize() explicitly for all of them.

    But the comboboxes: the ones at the bottom have their menus go off the bottom of the screen, and when I pop out of fullscreen mode, their dropdowns are cut off halfway. I have no idea how to fix that. Can someone step in here and put me out of my misery? What is the right way to scale a Flex application up to fullscreen?

        var button:Button = button_fullscreen;
        try {
            if (stage.displayState == StageDisplayState.FULL_SCREEN) {
                Application.application.stage.displayState = StageDisplayState.NORMAL;
                button.label = "View Fullscreen Mode";
                stage.scaleMode = StageScaleMode.NO_SCALE;
            } else {
                Application.application.stage.displayState = StageDisplayState.FULL_SCREEN;
                button.label = "Exit Fullscreen Mode";
                stage.scaleMode = StageScaleMode.SHOW_ALL;
            }
            invalidateSizes(); // Calls invalidateSize() explicitly on several components.
        } catch (error:SecurityError) {
            Alert.show("The security settings of your computer prevent this from being displayed in fullscreen.",
                       "Error: " + error.name + " #" + error.errorID);
        } catch (error:Error) {
            Alert.show(error.message, error.name + " #" + error.errorID);
        }

    Read the article

  • Cron Job on Ubuntu Hardy Executing But Not Deleting Files As Expected

    - by Patrick McKenzie
    I have a bit of a pickle here and wonder if anyone can give me some pointers. I have a cron job which executes for a particular user daily and is supposed to sweep files in a particular directory. Technically, it is two jobs. I've turned on cron.log to verify they're actually executing, and they are:

        May 24 11:03:01 AppNameGoesHere /USR/SBIN/CRON[11257]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {popular,index,purchasing,purchasing-alternate,support,about-us,guarantee,screenshots}.htm{,l})
        May 24 11:04:01 AppNameGoesHere /USR/SBIN/CRON[11260]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {stats,popular,bcf,articles,expenses})

    I have removed the actual usernames and formatted it so that it is less ugly on StackOverflow.

    Now, my question: despite the fact that I can see these deletions executing and apparently succeeding in the log, if I go to the specified directory, the files are still there. I initially suspected permission hijinks were going on, but I've verified that I can delete the files manually by su-ing into the mongrel_AppNameGoesHere user and issuing individual rm commands or by copy/pasting the cron job to the command line. Anything that I don't manually zap stays unzapped despite days of that cron job executing successfully.

    Any suggestions as to what might be happening? I was previously using Dapper Drake with these cron jobs in the /etc/crontab file directly, and when I upgraded to Hardy I moved them to user-specific crontabs (via sudo crontab -e -u mongrel_AppNameGoesHere), which was the point where they appear to have stopped working.

    Read the article

  • SQL Server 2008 Running trigger after Insert, Update locks original table

    - by Polity
    Hi folks, I have a serious performance problem. I have a database with (related to this problem) 2 tables. One table contains strings with some global information. The second table contains the strings stripped down to each individual word, so each string is indexed in the second table, word by word.

    The validity of the data in the second table is less important than the validity of the data in the first table. Since the first table can grow towards 1*10^6 records and the second table, having an average of about 10 words per string, can grow towards 1*10^7 records, I use a NOLOCK in order to read the second table; this leaves me free to insert new records without locking it (expect many reads on both tables).

    I have a script which keeps adding and updating rows in the first table in a MERGE statement. On average, the data being merged is about 20 strings at a time, and the script runs about once every 5 seconds.

    On the first table, I have a trigger which is invoked on an INSERT or UPDATE, which takes the newly inserted or updated data and calls a stored procedure on it which makes sure the data is indexed in the second table. (This takes some significant time.)

    The problem is that with the trigger disabled, reading the first table happens in a few ms. However, when I enable the trigger and you have the bad luck of trying to read the first table while it is being updated, our web server gives you a timeout after 10 seconds (which is way too long anyway). I can guess from this that while the trigger is running, the first table is kept (partially) locked until the trigger completes.

    What do you think - if I'm right, is there an easy way around this? Thanks in advance! Cheers, Koen

    Read the article

  • Using cellUpdateEvent with YUI DataTable and JSON DataSource

    - by Rob Hruska
    I'm working with a UI that has a (YUI2) JSON DataSource that's being used to populate a DataTable. What I would like to do is, when a value in the table gets updated, perform a simple animation on the cell whose value changed. Here are some relevant snippets of code:

        var columns = [
            {key: 'foo'},
            {key: 'bar'},
            {key: 'baz'}
        ];

        var dataSource = new YAHOO.util.DataSource('/someUrl');
        dataSource.responseType = YAHOO.util.DataSource.TYPE_JSON;
        dataSource.connXhrMode = 'queueRequests';
        dataSource.responseSchema = {
            resultsList: 'results',
            fields: [
                {key: 'foo'},
                {key: 'bar'},
                {key: 'baz'}
            ]
        };

        var dataTable = new YAHOO.widget.DataTable('container', columns, dataSource);

        var callback = function() {
            success: dataTable.onDataReturnReplaceRows,
            failure: function() {
                // error handling code
            },
            scope: dataTable
        };

        dataSource.setInterval(1000, null, callback);

    And here's what I'd like to do with it:

        dataTable.subscribe('cellUpdateEvent', function(record, column, oldData) {
            var td = dataTable.getTdEl({record: record, column: column});
            YAHOO.util.Dom.setStyle(td, 'backgroundColor', '#ffff00');
            var animation = new YAHOO.util.ColorAnim(td, {
                backgroundColor: { to: '#ffffff' }
            });
            animation.animate();
        });

    However, it doesn't seem like using cellUpdateEvent works. Does a cell that's updated as a result of the setInterval callback even fire a cellUpdateEvent? It may be that I don't fully understand what's going on under the hood with DataTable. Perhaps the whole table is being redrawn every time the data is queried, so it doesn't know or care about changes to individual cells? Is the solution to write my own specific function to replace onDataReturnReplaceRows? Could someone enlighten me on how I might go about accomplishing this?

    Edit: After digging through datatable-debug.js, it looks like onDataReturnReplaceRows won't fire the cellUpdateEvent. It calls reset() on the RecordSet that's backing the DataTable, which deletes all of the rows; it then re-populates the table with fresh data. I tried changing it to use onDataReturnUpdateRows, but that doesn't seem to work either.

    Read the article

  • Windows Azure DownloadToStream EOF error

    - by david
    Hello, I've been using Windows Azure to create a document management system, and things have gone well so far. I've been able to upload and download files to BLOB storage through an ASP.NET front end. What I'm attempting to do now is allow users to upload a .zip file, and then take the files out of that .zip and save them as individual files.

    The problem is, I'm getting "ZipException was unhandled", "EOF in header", and I don't know why. I'm using the ICSharpCode.SharpZipLib library, which I've used for many other tasks and it has worked great. Here's the basic code:

        CloudBlob ZipFile = container.GetBlobReference(blobURI);
        MemoryStream MemStream = new MemoryStream();
        ZipFile.DownloadToStream(MemStream);
        ....
        while ((theEntry = zipInput.GetNextEntry()) != null)

    and it's on the line that starts with while that I get the error. I added a sleep duration of 10 seconds just to make sure enough time had gone by. MemStream has a length if I debug it, but zipInput sometimes does and sometimes doesn't. It always fails. Thanks for any help. Dave
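
    A hedged sketch of one common cause of this symptom (offered as an assumption, not a confirmed diagnosis): after DownloadToStream the MemoryStream's position is left at the end of the data, so SharpZipLib immediately hits end-of-stream when it reads the first header. Rewinding the stream before constructing the ZipInputStream is what the sketch below illustrates; the method and variable names are made up.

        // Download the blob into memory, rewind, then walk the zip entries.
        using System.IO;
        using ICSharpCode.SharpZipLib.Zip;
        using Microsoft.WindowsAzure.StorageClient;

        static class ZipBlobReader
        {
            public static void ExtractEntries(CloudBlob zipBlob)
            {
                var memStream = new MemoryStream();
                zipBlob.DownloadToStream(memStream);

                memStream.Position = 0; // rewind: the position is at the end after the download

                using (var zipInput = new ZipInputStream(memStream))
                {
                    ZipEntry theEntry;
                    while ((theEntry = zipInput.GetNextEntry()) != null)
                    {
                        // theEntry.Name / zipInput.Read(...) - e.g. save each file as its own blob
                    }
                }
            }
        }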

    Read the article

  • Cache efficient code

    - by goldenmean
    This could sound like a subjective question, but what I am looking for are specific instances which you have encountered related to this.

    1) How do you make code cache-effective/cache-friendly (more cache hits, as few cache misses as possible)? From both perspectives, the data cache and the program cache (instruction cache), i.e. what things in one's code, related to data structures and code constructs, should one take care of to make it cache-effective?

    2) Are there any particular data structures one must use or avoid, or particular ways of accessing the members of a structure, etc., to make code cache-effective?

    3) Are there any program constructs (if, for, switch, break, goto, ...) or code flow (a for inside an if, an if inside a for, etc.) one should follow or avoid in this matter?

    I am looking forward to hearing individual experiences related to making cache-efficient code in general. It can be any programming language (C, C++, Assembly, ...), any hardware target (ARM, Intel, PowerPC, ...), any OS (Windows, Linux, Symbian, ...), etc. More variety will help to understand it deeply.
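
    As one concrete, hedged illustration of the data-cache side (any language would do; C# is used here only as an example), the sketch below contrasts traversing a rectangular array in the order it is laid out in memory with traversing it across that layout. The first loop touches memory sequentially and tends to hit cache lines already loaded; the second strides across rows and tends to miss. The size of the difference depends on the machine and the array size.

        // Row-major vs column-major traversal of a rectangular array (stored row by row).
        using System;
        using System.Diagnostics;

        class TraversalOrder
        {
            static void Main()
            {
                const int N = 4000;
                var data = new int[N, N];

                var sw = Stopwatch.StartNew();
                long sumRowMajor = 0;
                for (int i = 0; i < N; i++)      // matches the memory layout
                    for (int j = 0; j < N; j++)
                        sumRowMajor += data[i, j];
                Console.WriteLine("row-major:    " + sw.ElapsedMilliseconds + " ms");

                sw.Restart();
                long sumColMajor = 0;
                for (int j = 0; j < N; j++)      // strides across rows, poor locality
                    for (int i = 0; i < N; i++)
                        sumColMajor += data[i, j];
                Console.WriteLine("column-major: " + sw.ElapsedMilliseconds + " ms");
            }
        }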

    Read the article

  • Silverlight 4 Drag and Drop Treeview

    - by Rich
    Does anyone have an example for any of the following scenarios? Given, these are all dynamically populated trees - not using a hierarchical data template, but iterating through object collections manually and appending children at the appropriate level.

    1. TreeView1 has 3 levels, but items can only be reordered within their level. So, let's say we have Drives, Folders and Files. Drives can be rearranged in an order, but not put into a Folder. When navigated down one level in a drive, the individual folders can be reordered, but not dragged between drives, and the same with files: they can only be reordered, not moved to a different folder or drive.

    2. I have 2 treeviews. TreeView1 is the same as #1 above and TreeView2 is like a picklist of available items. A user can drag an item from TreeView2 to TreeView1, but it can only be placed at TreeView1's File level. The dragged item cannot be a child of a file, nor placed at the folder level, nor placed at the drive level.

    Also, how to handle the Above, On Top, or Below an item cases? I have yet to come across these examples.

    Read the article

  • drupal_add_css not working

    - by hfidgen
    Hiya, I need to use drupal_add_css to call stylesheets onto single D6 pages. I don't want to edit the main theme stylesheet, as there will be a set of individual pages which all need completely new styles - the main sheet would be massive if I put it all in there. My solution was to edit the page in PHP editor mode and do this:

        <?php drupal_add_css("/styles/file1.css", "theme"); ?>
        <div id="newPageContent">stuff here in html</div>

    But when I view source, there is nothing there! Not even a broken CSS link or anything; it's just refusing to add the stylesheet to the CSS package put into the page head. Variations don't seem to work either:

        drupal_add_css($path = '/styles/file1.css', $type = 'module', $media = 'all', $preprocess = TRUE)

    My template header looks like this; I've not changed anything from the default other than adding a custom JS file.

        <head>
          <?php print $head ?>
          <title><?php print $head_title ?></title>
          <?php print $styles ?>
          <?php print $scripts ?>
          <script type="text/javascript" src="<?php print base_path() ?>misc/askme.js"></script>
          <!--[if lt IE 7]>
            <?php print phptemplate_get_ie_styles(); ?>
          <![endif]-->
        </head>

    Can anyone think of a reason why this function is not working? Thanks!

    Read the article
