Search Results

Search found 332 results on 14 pages for 'baz'.

Page 5 of 14

  • Silverlight: Binding to a LayoutRoot value from within a DataTemplate

    - by Rosarch
    I have a DataTemplate for a ListBox, where I have several controls that bind to an item. I would also like to bind to a value on LayoutRoot.DataContext. I'm unsure of how to do this.

        <!--LayoutRoot is the root grid where all page content is placed-->
        <StackPanel x:Name="LayoutRoot" Background="Transparent">
            <ListBox ItemsSource="{Binding Items}">
                <ListBox.ItemTemplate>
                    <DataTemplate>
                        <StackPanel>
                            <TextBlock Text="{Binding}" />
                            <TextBlock Text="{Binding ElementName=LayoutRoot, Path=DataContext.Foo}" />
                        </StackPanel>
                    </DataTemplate>
                </ListBox.ItemTemplate>
            </ListBox>
        </StackPanel>

        public partial class MainPage : PhoneApplicationPage
        {
            public string Foo { get { return "the moon"; } }

            private int startIndex = 1;
            private IList<string> _data = new List<string>() { "foo", "bar", "baz" };
            public IList<string> Items { get { return _data; } }

            // Constructor
            public MainPage()
            {
                InitializeComponent();
                LayoutRoot.DataContext = this;
            }
        }

    This doesn't work; only the _data items are displayed. The following binding errors appear in the Debug output:

        System.Windows.Data Error: BindingExpression path error: 'Foo' property not found on 'foo' 'System.String' (HashCode=1502598398). BindingExpression: Path='DataContext.Foo' DataItem='System.Windows.Controls.Border' (HashCode=78299055); target element is 'System.Windows.Controls.TextBlock' (Name=''); target property is 'Text' (type 'System.String')..
        System.Windows.Data Error: BindingExpression path error: 'Foo' property not found on 'bar' 'System.String' (HashCode=696029481). BindingExpression: Path='DataContext.Foo' DataItem='System.Windows.Controls.Border' (HashCode=78298703); target element is 'System.Windows.Controls.TextBlock' (Name=''); target property is 'Text' (type 'System.String')..
        System.Windows.Data Error: BindingExpression path error: 'Foo' property not found on 'baz' 'System.String' (HashCode=696029489). BindingExpression: Path='DataContext.Foo' DataItem='System.Windows.Controls.Border' (HashCode=78298694); target element is 'System.Windows.Controls.TextBlock' (Name=''); target property is 'Text' (type 'System.String')..

    Do I have a syntax error somewhere?

    Update: I'm aiming for something that looks like this:

        foo the moon
        bar the moon
        baz the moon

    Instead, all I'm getting is:

        foo
        bar
        baz

  • How to parse json data in jquery ajax success?

    - by samarh.k
    On the server side (Django):

        info = {'phone_number': '123456',
                'personal_detail': {'foo': foo, 'bar': bar},
                'is_active': 1,
                'document_detail': {'baz': baz, 'saz': saz},
                'is_admin': 1,
                'email': '[email protected]'}
        return HttpResponse(simplejson.dumps({'success': 'True', 'result': info}),
                            mimetype='application/javascript')

    And in the jQuery success callback:

        if (data["success"] === "True") {
            alert(data[**here I want to display personal_detail and document_details**]);
        }

    How can I do this?
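
    One way to get at the nested objects (a sketch, assuming the Django view above is wired to a URL such as /my-view/ -- that URL is made up for illustration): tell jQuery the response is JSON so it is parsed for you, then index into the parsed result.

        $.ajax({
            url: '/my-view/',          // hypothetical endpoint
            dataType: 'json',          // jQuery parses the response into an object
            success: function (data) {
                if (data.success === 'True') {
                    // after parsing, nested structures are plain JS objects
                    alert(data.result.personal_detail.foo);
                    alert(data.result.document_detail.baz);
                }
            }
        });

    Serving the payload with mimetype='application/json' (rather than application/javascript) also helps jQuery detect the type automatically.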

  • Beginner question about getting reference to cin

    - by John C
    I'm having problems wrapping my head around this. I have a function

        void foo(istream& input) {
            input = cin;
        }

    This fails (I'm assuming because cin isn't supposed to be "copyable"). However, this works:

        void foo(istream& input) {
            istream& baz = cin;
        }

    Is there a reason that I can get a reference to cin in baz but I cannot assign it to input? Thanks
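
    The distinction (a sketch of the usual explanation): a reference can only be bound once, at initialization. "istream& baz = cin;" binds a brand-new reference, which is fine; "input = cin;" is an assignment *through* an already-bound reference, so it tries to copy one stream into another -- and the standard stream classes are not copyable. If the function needs to switch streams, a pointer can be re-pointed:

        #include <iostream>
        #include <string>

        // A pointer, unlike a reference, can be reseated after initialization.
        void foo(std::istream& input) {
            std::istream* in = &input;   // start with the caller's stream
            in = &std::cin;              // fine: re-point, no copying involved
            std::string word;
            *in >> word;
        }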

  • In Vim, how to swap 2 non adjacent patterns?

    - by ThG
    I have lines of text, all with the same structure, and would like to make a permutation of 2 elements on all lines:

        1257654 some text (which may be long) #Foo
        1543098 some other text #Barbar
        1238769 whatever #Baz
        2456874 something else #Quux

    I want to obtain:

        #Foo some text (which may be long) 1257654
        #Barbar some other text 1543098
        #Baz whatever 1238769
        #Quux something else 2456874

    This is where I am stuck:

        :%s/\(\d\{7\}\)\(#.\{-}\)/\2\1/

    Where did I go wrong?
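
    A possible correction (a sketch, assuming the fields are separated by whitespace): the attempted pattern never captures the middle text, so there is nothing to carry it across the swap. Capturing all three pieces and emitting them in reverse order does it:

        :%s/^\(\d\{7\}\)\s\+\(.\{-}\)\s\+\(#.*\)$/\3 \2 \1/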

  • Writing the tests for FluentPath

    - by Bertrand Le Roy
    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn't designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that is not an option, as nothing in System.IO enables us to plug a mock file system in. As a consequence, we are left with few options. A few people have suggested that I abstract my calls to System.IO away so that I could tell FluentPath -- not System.IO -- to use a mock instead of the real thing. That in turn is getting a little silly: FluentPath already is a thin abstraction around System.IO, so layering another abstraction between them would double the test surface while bringing little or no value. I would have to test that new abstraction layer, and that would bring us back to square one. Unless I'm missing something, the only option I have here is to bite the bullet and test against the real file system. Of course, the tests that do that can hardly be called unit tests. They are more integration tests, as they don't only test bits of my code. They really test the successful integration of my code with the underlying System.IO.

    In order to write such tests, the techniques of BDD work particularly well, as they enable you to express scenarios in natural language, from which test code is generated. Integration tests are better expressed as scenarios orchestrating a few basic behaviors, so this is a nice fit. The Orchard team has been successfully using SpecFlow for integration tests for a while and I thought it was pretty cool, so that's what I decided to use. Consider for example the following scenario:

        Scenario: Change extension
            Given a clean test directory
            When I change the extension of bar\notes.txt to foo
            Then bar\notes.txt should not exist
            And bar\notes.foo should exist

    This is human readable and tells you everything you need to know about what you're testing, but it is also executable code. What happens when SpecFlow compiles this scenario is that it executes a bunch of regular expressions that identify the known Given (set-up phases), When (actions) and Then (result assertions) to identify the code to run, which is then translated into calls into the appropriate methods. Nothing magical.

    Here is the code generated by SpecFlow:

        [NUnit.Framework.TestAttribute()]
        [NUnit.Framework.DescriptionAttribute("Change extension")]
        public virtual void ChangeExtension()
        {
            TechTalk.SpecFlow.ScenarioInfo scenarioInfo =
                new TechTalk.SpecFlow.ScenarioInfo("Change extension", ((string[])(null)));
        #line 6
            this.ScenarioSetup(scenarioInfo);
        #line 7
            testRunner.Given("a clean test directory");
        #line 8
            testRunner.When("I change the extension of " + "bar\\notes.txt to foo");
        #line 9
            testRunner.Then("bar\\notes.txt should not exist");
        #line 10
            testRunner.And("bar\\notes.foo should exist");
        #line hidden
            testRunner.CollectScenarioErrors();
        }

    The #line directives are there to give clues to the debugger, because yes, you can put breakpoints into a scenario.

    The way you usually write tests with SpecFlow is that you write the scenario first, let it fail, then write the translation of your Given, When and Then into code if they don't already exist, which results in running but failing tests, and then you write the code to make your tests pass (you implement the scenario). In the case of FluentPath, I built a simple Given method that builds a simple file hierarchy in a temporary directory that all scenarios are going to work with:

        [Given("a clean test directory")]
        public void GivenACleanDirectory()
        {
            _path = new Path(SystemIO.Path.GetTempPath())
                .CreateSubDirectory("FluentPathSpecs")
                .MakeCurrent();
            _path.GetFileSystemEntries()
                .Delete(true);
            _path.CreateFile("foo.txt", "This is a text file named foo.");
            var bar = _path.CreateSubDirectory("bar");
            bar.CreateFile("baz.txt", "bar baz")
               .SetLastWriteTime(DateTime.Now.AddSeconds(-2));
            bar.CreateFile("notes.txt", "This is a text file containing notes.");
            var barbar = bar.CreateSubDirectory("bar");
            barbar.CreateFile("deep.txt", "Deep thoughts");
            var sub = _path.CreateSubDirectory("sub");
            sub.CreateSubDirectory("subsub");
            sub.CreateFile("baz.txt", "sub baz")
               .SetLastWriteTime(DateTime.Now);
            sub.CreateFile("binary.bin",
                new byte[] {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0xFF});
        }

    Then, to implement the scenario that you can read above, I had to write the following When:

        [When("I change the extension of (.*) to (.*)")]
        public void WhenIChangeTheExtension(
            string path, string newExtension)
        {
            var oldPath = Path.Current.Combine(path.Split('\\'));
            oldPath.Move(p => p.ChangeExtension(newExtension));
        }

    As you can see, the When attribute is specifying the regular expression that will enable the SpecFlow engine to recognize what When method to call and also how to map its parameters. For our scenario, "bar\notes.txt" will get mapped to the path parameter, and "foo" to the newExtension parameter. And of course, the code that verifies the assumptions of the scenario:

        [Then("(.*) should exist")]
        public void ThenEntryShouldExist(string path)
        {
            Assert.IsTrue(_path.Combine(path.Split('\\')).Exists);
        }

        [Then("(.*) should not exist")]
        public void ThenEntryShouldNotExist(string path)
        {
            Assert.IsFalse(_path.Combine(path.Split('\\')).Exists);
        }

    These steps should be written with reusability in mind. They are building blocks for your scenarios, not implementations of a specific scenario. Think small and fine-grained. In the case of the above steps, I could reuse each of those steps in other scenarios. Those tests are easy to write and easier to read, which means that they also constitute a form of documentation. Oh, and SpecFlow is just one way to do this.

    Rob wrote about this sort of thing a long time ago (but using a different framework), and I highly recommend this post if I somehow managed to pique your interest: http://blog.wekeroad.com/blog/make-bdd-your-bff-2/ And this screencast (Rob always makes excellent screencasts): http://blog.wekeroad.com/mvc-storefront/kona-3/ (click the "Download it here" link)

  • [WPF] ComboBox.Text not taking the ItemStringFormat property into account

    - by Thomas Levesque
    I just noticed a strange behavior which looks like a bug. Consider the following XAML:

        <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
              xmlns:sys="clr-namespace:System;assembly=mscorlib">
            <Page.Resources>
                <x:Array x:Key="data" Type="{x:Type sys:String}">
                    <sys:String>Foo</sys:String>
                    <sys:String>Bar</sys:String>
                    <sys:String>Baz</sys:String>
                </x:Array>
            </Page.Resources>
            <StackPanel Orientation="Vertical">
                <Button>Boo</Button>
                <ComboBox Name="combo"
                          ItemsSource="{Binding Source={StaticResource data}}"
                          ItemStringFormat="##{0}##" />
                <TextBlock Text="{Binding Text, ElementName=combo}"/>
            </StackPanel>
        </Page>

    The ComboBox displays the values as "##Foo##", "##Bar##" and "##Baz##". But the TextBlock displays the selected values as "Foo", "Bar" and "Baz". So the ItemStringFormat is apparently ignored for the Text property... Is that a bug? If it is, is there a workaround? Or am I just doing something wrong?
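
    A possible workaround (an untested sketch -- it sidesteps ComboBox.Text rather than fixing it): bind the TextBlock to the selected item and re-apply the same format string there, so the display no longer depends on how ComboBox.Text is composed.

        <TextBlock Text="{Binding SelectedItem, ElementName=combo, StringFormat=##{0}##}"/>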

  • Programmatically implementing an interface that combines some instances of the same interface in various ways

    - by namin
    What is the best way to implement an interface that combines some instances of the same interface in various specified ways? I need to do this for multiple interfaces and I want to minimize the boilerplate and still achieve good efficiency, because I need this for a critical production system. Here is a sketch of the problem. Abstractly, I have a generic combiner class which takes the instances and specifies the various combinators:

        class Combiner<I> {
            I[] instances;
            <T> T combineSomeWay(InstanceMethod<I,T> method) {
                // ... method.call(instances[i]) ... combined in some way ...
            }
            // more combinators
        }

    Now, let's say I want to implement the following interface among many others:

        interface Foo {
            String bar(int baz);
        }

    I want to end up with code like this:

        class FooCombiner implements Foo {
            Combiner<Foo> combiner;

            @Override
            public String bar(final int baz) {
                return combiner.combineSomeWay(new InstanceMethod<Foo, String>() {
                    @Override
                    public String call(Foo instance) {
                        return instance.bar(baz);
                    }
                });
            }
        }

    Now, this can quickly get long and winded if the interfaces have lots of methods. I know I could use a dynamic proxy from the Java reflection API to implement such interfaces, but method access via reflection is hundreds of times slower. So what are the alternatives to boilerplate and reflection in this case?

  • Skip makefile dependency generation for certain targets (e.g. `clean`)

    - by Shtééf
    I have several C and C++ projects that all follow a basic structure I've been using for a while now. My source files go in src/*.c, intermediate files in obj/*.[do], and the actual executable in the top level directory. My makefiles follow roughly this template:

        # The final executable
        TARGET := something

        # Source files (without src/)
        INPUTS := foo.c bar.c baz.c

        # OBJECTS will contain: obj/foo.o obj/bar.o obj/baz.o
        OBJECTS := $(INPUTS:%.c=obj/%.o)

        # DEPFILES will contain: obj/foo.d obj/bar.d obj/baz.d
        DEPFILES := $(OBJECTS:%.o=%.d)

        all: $(TARGET)

        obj/%.o: src/%.c
        	$(CC) $(CFLAGS) -c -o $@ $<

        obj/%.d: src/%.c
        	$(CC) $(CFLAGS) -M -MF $@ -MT $(@:%.d=%.o) $<

        $(TARGET): $(OBJECTS)
        	$(LD) $(LDFLAGS) -o $@ $(OBJECTS)

        .PHONY: clean
        clean:
        	-rm -f $(OBJECTS) $(DEPFILES) $(RPOFILES) $(TARGET)

        -include $(DEPFILES)

    Now I'm at the point where I'm packaging this for a Debian system. I'm using debuild to build the Debian source package, and pbuilder to build the binary package. The debuild step only has to execute the clean target, but even this causes the dependency files to be generated and included. In short, my question is really: can I somehow prevent make from generating dependencies when all I want is to run the clean target?
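
    A common fix (a sketch): make the include conditional on the requested goal, so that invoking the clean target never drags the dependency files back in. GNU make exposes the goals in $(MAKECMDGOALS):

        # Only pull in (and thus regenerate) the .d files when we are not cleaning.
        ifneq ($(MAKECMDGOALS),clean)
        -include $(DEPFILES)
        endif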

  • Parsing/Tokenizing a String Containing a SQL Command

    - by Alan Storm
    Are there any open source libraries (any language, Python/PHP preferred) that will tokenize/parse an ANSI SQL string into its various components? That is, if I had the following string:

        SELECT a.foo, b.baz, a.bar
        FROM TABLE_A a
        LEFT JOIN TABLE_B b ON a.id = b.id
        WHERE baz = 'snafu';

    I'd get back a data structure/object something like:

        //fake PHPish
        $results['select-columns'] = Array[a.foo, b.baz, a.bar];
        $results['tables'] = Array[TABLE_A, TABLE_B];
        $results['table-aliases'] = Array[a=TABLE_A, b=TABLE_B];
        //etc...

    Restated, I'm looking for the code in a database package that teases the SQL command apart so that the engine knows what to do with it. Searching the internet turns up a lot of results on how to parse a string WITH SQL. That's not what I want. I realize I could glop through an open source database's code to find what I want, but I was hoping for something a little more ready-made (although if you know where in the MySQL, PostgreSQL, or SQLite source to look, feel free to pass it along). Thanks!
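
    One candidate worth a look (a sketch; it returns a token tree rather than the exact arrays above): the pure-Python sqlparse library, a non-validating ANSI SQL parser.

        # pip install sqlparse
        import sqlparse

        sql = ("SELECT a.foo, b.baz, a.bar FROM TABLE_A a "
               "LEFT JOIN TABLE_B b ON a.id = b.id WHERE baz = 'snafu';")
        stmt = sqlparse.parse(sql)[0]        # parse() returns one Statement per query
        print(stmt.get_type())               # -> 'SELECT'
        for token in stmt.tokens:            # walk the top-level token tree
            if not token.is_whitespace:
                print(token.ttype, repr(str(token)))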

  • How to get node without children in xQuery?

    - by mbrevoort
    So I have two nodes of elements that I'm essentially trying to join. I want the top level node to stay the same, but the child nodes to be replaced by those cross-referenced. Given:

        <stuff>
            <item foo="foo" boo="1"/>
            <item foo="bar" boo="2"/>
            <item foo="baz" boo="3"/>
            <item foo="blah" boo="4"/>
        </stuff>

        <list a="1" b="2">
            <foo>bar</foo>
            <foo>baz</foo>
        </list>

    I want to loop through "list" and cross-reference elements in "stuff" for this result:

        <list a="1" b="2">
            <item foo="bar" boo="2"/>
            <item foo="baz" boo="3"/>
        </list>

    I want to do this without having to know about what attributes might be on "list". In other words, I don't want to have to explicitly call them out like attribute a { $list/@a }, attribute b { $list/@b }.
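
    One way to write the join (a sketch, assuming $stuff and $list are bound to the two nodes above): copy every attribute generically with @*, then pull the matching items by value.

        element list {
            $list/@*,                        (: copies a, b, and anything else :)
            for $f in $list/foo
            return $stuff/item[@foo = $f]    (: join on the item's foo attribute :)
        }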

  • Behavior of local variables in JavaScripts with()-statement

    - by thr
    I noticed some weird (and, to my knowledge, undefined -- by the ECMA 3.0 spec at least) behavior. Take the following snippet:

        var foo = { bar: "1", baz: "2" };
        alert(bar);
        with(foo) {
            alert(bar);
            alert(bar);
        }
        alert(bar);

    It crashes in both Firefox and Chrome because "bar" doesn't exist in the first alert() statement; this is as expected. But if you add a declaration of bar inside the with()-statement, so it looks like this:

        var foo = { bar: "1", baz: "2" };
        alert(bar);
        with(foo) {
            alert(bar);
            var bar = "g2";
            alert(bar);
        }
        alert(bar);

    It will produce the following: undefined, 1, g2, undefined

    It seems as if you create a variable inside a with()-statement, most browsers (tested in Chrome and Firefox) will make that variable exist outside that scope also; it's just set to undefined. Now from my perspective bar should only exist inside the with()-statement, and if you make the example even weirder:

        var foo = { bar: "1", baz: "2" };
        var zoo;
        alert(bar);
        with(foo) {
            alert(bar);
            var bar = "g2";
            zoo = function() { return bar; }
            alert(bar);
        }
        alert(bar);
        alert(zoo());

    It will produce this: undefined, 1, g2, undefined, g2

    So the bar inside the with()-statement does not exist outside of it, yet the runtime somehow "automagically" creates a variable named bar that is undefined in its top level scope (global or function), but this variable does not refer to the same one as inside the with()-statement, and that variable will only exist if a with()-statement has a variable named bar defined inside it. Very weird, and inconsistent. Anyone have an explanation for this behavior? There is nothing in the ECMA spec about this.
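
    The usual explanation (a sketch of how var hoisting interacts with the with() scope chain):

        // All 'var' declarations are hoisted to the top of the enclosing
        // function/global scope; only the *assignment* stays where written.
        var bar;                    // hoisted: exists everywhere, value undefined
        var foo = { bar: "1", baz: "2" };

        alert(bar);                 // undefined: declared by hoisting, never assigned
        with (foo) {
            alert(bar);             // "1": foo.bar shadows the hoisted bar
            bar = "g2";             // resolves through foo first -> writes foo.bar
            alert(bar);             // "g2"
        }
        alert(bar);                 // undefined: the outer bar was never touched
        alert(foo.bar);             // "g2": this is where the assignment landed

    So the runtime never creates a second "magic" variable: the declaration is hoisted out of the with() block (which is why bar exists, as undefined, before and after it), while the assignment is captured by foo, because property lookup inside with() consults foo first.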

  • detecting object-reference duplication across JavaScript files

    - by AnC
    I have a number of files with contents like this:

        function hello() {
            ...
            element1.text = foo.locale.lorem;
            element2.text = foo.locale.ipsum;
            ...
            elementn.text = foo.locale.whatever;
            ...
        }

        function world() {
            ...
            var label = bar.options.baz.blah;
            var toggle = bar.options.baz.use_toggle;
            ...
        }

    This could be written more efficiently, and also be more readable, by creating a shortcut to the locale object:

        function hello() {
            var loc = foo.locale;
            ...
            element1.text = loc.lorem;
            element2.text = loc.ipsum;
            ...
            elementn.text = loc.whatever;
            ...
        }

        function world() {
            var options = bar.options.baz;
            ...
            var label = options.blah;
            var toggle = options.use_toggle;
            ...
        }

    Is there a simple way to detect occurrences of such duplication for any arbitrary object (it's not always as simple as "locale", or foo.something)? Basically, I want to know where lengthy object references appear two or more times within a function. Thanks!
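
    Absent a ready-made tool, a rough heuristic is easy to script (a sketch -- it counts dotted chains per file rather than per function, and the file name is made up for illustration):

        // Usage: node find-dupes.js somefile.js
        var fs = require('fs');
        var source = fs.readFileSync(process.argv[2], 'utf8');
        var chain = /\b[A-Za-z_$][\w$]*(?:\.[A-Za-z_$][\w$]*)+/g;
        var counts = {}, m;
        while ((m = chain.exec(source)) !== null) {
            var parts = m[0].split('.');
            // Count every prefix of 2+ segments, so foo.locale.lorem and
            // foo.locale.ipsum both bump the shared prefix foo.locale.
            for (var i = 2; i <= parts.length; i++) {
                var prefix = parts.slice(0, i).join('.');
                counts[prefix] = (counts[prefix] || 0) + 1;
            }
        }
        for (var ref in counts) {
            if (counts[ref] > 1) console.log(counts[ref] + 'x  ' + ref);
        }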

  • How to test routes that don't include controller?

    - by Darren Green
    I'm using minitest in Rails to do testing, but I'm running into a problem that I hope a more seasoned tester can help me out with. I've tried looking everywhere for the answer, but it doesn't seem that anyone has run into this problem -- or if they have, they opted for an integration test. Let's say I have a controller called Foo and an action in it called bar. So the foo_controller.rb file looks like this:

        class FooController < ApplicationController
          def bar
            render 'bar', :layout => 'application'
          end
        end

    The thing is that I don't want people to access the "foo/bar" route directly. So I have a route that is get 'baz' => 'foo#bar'. Now I want to test the FooController:

        require 'minitest_helper'

        class FooControllerTest < ActionController::TestCase
          def test_should_get_index
            get '/baz'
          end
        end

    But the test results in the error "No route matches {:controller=>"foo", :action=>"/baz"}". How do I specify the controller for the GET request? Sorry if this is a dumb question. It's been very hard for me to find the answer.
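
    A sketch of the usual split: in ActionController::TestCase the argument to get is an action name on the controller under test, not a URL, so the controller test exercises the action directly and a separate routing assertion pins '/baz' to it.

        class FooControllerTest < ActionController::TestCase
          def test_should_get_bar
            get :bar                    # call the action, not the path
            assert_response :success
          end

          def test_baz_routes_to_foo_bar
            # routing assertions are available inside controller tests
            assert_routing '/baz', :controller => 'foo', :action => 'bar'
          end
        end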

  • Eclipse > Javascript > Code highlighting not working with Object Notation

    - by Redsandro
    I am using Eclipse Helios with PDT, and when I am editing JavaScript files with the default JavaScript Editor (JSDT), code highlighting (Mark Occurrences) is not working for half of the code, for example JSON-style (or Object Literal, if you will) declarations. A little example:

        Foo = {};
        Foo.Bar = Foo.Bar || {};
        Foo.Bar = {
            bar: function(str) {
                alert(str)
            },
            baz: function(str) {
                this.bar(str); // This bar *is* highlighted though
            }
        };
        Foo.Bar.baz('text');

    No Bar, bar or baz is highlighted. For now, I humbly edit the JavaScript part of projects in Notepad++, because it just highlights every occurrence of whatever is currently selected. Is there a common practice for Eclipse JavaScript developers to get code highlighting to work correctly, using the popular Object Literal notation? An option or update I missed?

    Update: I have found that code highlighting depends on the code being properly outlined. Although commonly used, Object Literal outlining still seems rare in JavaScript editors. The Spket JavaScript Editor does partial Object Literal outlining, and the Aptana JavaScript Editor does full Object Literal outlining. But both lose other important functionality. A quest for the editor with the least loss of functionality is currently in progress in this question.

  • Coding the R-ight way - avoiding the for loop

    - by mropa
    I am going through one of my .R files and, by cleaning it up a little bit, I am trying to get more familiar with writing the code the r-ight way. As a beginner, one of my favorite starting points is to get rid of the for() loops and try to transform the expression into a functional programming form. So here is the scenario: I am assembling a bunch of data.frames into a list for later usage.

        dataList <- list(dataA, dataB, dataC, dataD, dataE)

    Now I'd like to take a look at each data.frame's column names and substitute certain character strings. E.g., I'd like to substitute each "foo" and "bar" with "baz". At the moment I am getting the job done with a for() loop, which looks a bit awkward.

        colnames(dataList[[1]])
        [1] "foo"  "code" "lp15" "bar"  "lh15"

        colnames(dataList[[2]])
        [1] "a"    "code" "lp50" "ls50" "foo"

        matchVec <- c("foo", "bar")
        for (i in seq(dataList)) {
            for (j in seq(matchVec)) {
                colnames(dataList[[i]])[grep(pattern=matchVec[j],
                                             x=colnames(dataList[[i]]))] <- c("baz")
            }
        }

    Since I am working here with a list, I thought about the lapply function. My attempts at handling the job with the lapply function all seem to look alright, but only at first sight. If I write

        f <- function(i, xList) {
            gsub(pattern=c("foo"), replacement=c("baz"), x=colnames(xList[[i]]))
        }
        lapply(seq(dataList), f, xList=dataList)

    the last line prints out almost what I am looking for. However, if I take another look at the actual names of the data.frames in dataList:

        lapply(dataList, colnames)

    I see that no changes have been made to the initial character strings. So how can I rewrite the for() loop and transform it into a functional programming form? And how do I substitute both strings, "foo" and "bar", in an efficient way, given that the gsub() function takes only a character vector of length one as its pattern argument?
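
    One functional rewrite (a sketch): the catch in the lapply() attempt is that it returns new values instead of modifying dataList in place, so the result has to be assigned back; and gsub() accepts alternation in its pattern, which handles both strings in a single call.

        dataList <- lapply(dataList, function(df) {
            colnames(df) <- gsub("foo|bar", "baz", colnames(df))
            df    # return the modified data.frame
        })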

  • mod_rewrite for specific domains in a mappings file

    - by scott
    I have a bunch of domains that I want to redirect to different parts of one target domain.

        # this is what I currently have
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^.*\.?foo\.com$ [NC]
        RewriteRule ^.*$ ${domainmappings:www.foo.com} [L,R=301]

        # rewrite map file
        www.foo.com www.domain.com/domain/foo.com.php
        www.bar.com www.domain.com/domain/bar.com.php
        www.baz.com www.domain.com/other/baz.php.foo

    The problem is that I don't want to have to add each domain to the RewriteCond. I tried

        RewriteCond %{HTTP_HOST} ^www\.(.*)
        RewriteRule (.*) http://%1/$1 [R=301,L]

    but that will do it for EVERY domain. I only want the domains that are in the mappings file to redirect, and then continue on to other rewrites if it doesn't match any domains in the mappings file.
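
    One way to keep a single rule (a sketch; the map name and file path below are assumptions): look the host up in the map itself and only redirect when the lookup returns something, using a default value as the sentinel.

        RewriteEngine On
        # RewriteMap must be declared in server or virtual-host context
        RewriteMap domainmappings txt:/etc/apache2/domainmappings.txt

        # Succeeds only when the host is actually in the map file
        RewriteCond ${domainmappings:%{HTTP_HOST}|NOT_FOUND} !=NOT_FOUND
        RewriteCond ${domainmappings:%{HTTP_HOST}} ^(.+)$
        RewriteRule ^.*$ http://%1 [R=301,L]

    Hosts missing from the map fail the first condition, so the request falls through to any later rewrite rules untouched.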

  • Diff and ignore lines missing in one file

    - by Millianz
    I want to diff two files and ignore lines that are present in one file but missing in the other. For example:

        File1:
        foo
        bar
        baz
        bat

        File2:
        foo
        ball
        bat

    I'm currently running the following diff command:

        diff File1 File2 --changed-group-format='%>' --unchanged-group-format=''

    which in this case would produce

        bar
        baz

    as the output, i.e. only missing or conflicting lines. I would like to only print conflicting lines, i.e. ignore cases where a line is missing from File2 but present in File1 (not the other way around). Is there any way to do something like this using diff, or do I have to resort to other tools? If so, what would you recommend?
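
    A sketch using diff's remaining group-format flags: GNU diff distinguishes "old" groups (lines only in File1), "new" groups (only in File2), and "changed" groups (present in both files but different), and each kind can be formatted -- or silenced -- independently.

        diff File1 File2 \
          --old-group-format='' \
          --new-group-format='' \
          --unchanged-group-format='' \
          --changed-group-format='%<'   # %< prints File1's side; use %> for File2's

    Note that diff decides on its own how to pair lines into a changed group, so an insertion adjacent to a genuine conflict can still ride along with it.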

  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem

    Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context.

    XML Analogues

    I recently needed to trawl through gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:

        <foo user="me">
            <baz key="zoidberg" value="squid" />
            <baz key="leela" value="cyclops" />
            <baz key="fry" value="rube" />
        </foo>

    And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one-liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet, which is installing as I type this and which I look forward to using in the future.

    JSON Solutions?

    So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this:

        {
            "firstName": "Bender",
            "lastName": "Robot",
            "age": 200,
            "address": {
                "streetAddress": "123",
                "city": "New York",
                "state": "NY",
                "postalCode": "1729"
            },
            "phoneNumber": [
                { "type": "home", "number": "666 555-1234" },
                { "type": "fax", "number": "666 555-4567" }
            ]
        }

    and wanted to find the average number of phone numbers each person had, I could do something like this with XPath:

        fn:avg(/fn:count(phoneNumber))

    Questions

    Are there any command-line tools that can "query" JSON files in this way? If you have to process a bunch of JSON files on a Unix command line, what tools do you use? Heck, is there even work being done to make a query language like this for JSON? If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas? I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong, and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data, shell tools are needed.

    Related Questions

        Grep and Sed Equivalent for XML Command Line Processing
        Is there a query language for JSON?
        JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
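
    One tool that has since filled exactly this niche (a sketch): jq, a standalone command-line JSON processor that reads JSON from files or stdin and evaluates XPath-flavored filter expressions over it. The average-phone-numbers query against a file of person objects like the one above might look like:

        # -s (slurp) wraps the input objects in an array so we can average over them
        jq -s '[.[].phoneNumber | length] | add / length' people.json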

  • Regex to match a whole string only if it lacks a given substring/suffix

    - by Ivan Krechetov
    I've searched for questions like this, but all the cases I found were solved in a problem-specific manner, like using !g in vi to negate the regex matches, or matching other things, without a regex negation. Thus, I'm interested in a "pure" solution to this: having a set of strings, I need to filter them with a regular expression matcher so that it only leaves (matches) the strings lacking a given substring. For example, filtering out "Foo" in:

        Boo
        Foo
        Bar
        FooBar
        BooFooBar
        Baz

    would result in:

        Boo
        Bar
        Baz

    I tried constructing it with negative look aheads/behinds (?!regex)/(?<!regex), but couldn't figure it out. Is that even possible?
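
    A sketch of the standard trick: anchor a negative lookahead at the start of the string, so the whole match only succeeds when "Foo" appears nowhere after it.

        ^(?!.*Foo).*$

    For example, with a PCRE-capable grep:

        printf 'Boo\nFoo\nBar\nFooBar\nBooFooBar\nBaz\n' | grep -P '^(?!.*Foo)'

    prints Boo, Bar and Baz.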

  • How can I get TFS 2010 to build each project to a separate directory?

    - by Jonathan Schuster
    In our project, we'd like to have our TFS build put each project into its own folder under the drop folder, instead of dropping all of the files into one flat structure. To illustrate, we'd like to see something like this:

        DropFolder/
            Foo/
                foo.exe
            Bar/
                bar.dll
            Baz/
                baz.dll

    This is basically the same question as was asked here, but now that we're using workflow-based builds, those solutions don't seem to work. The solution using the CustomizableOutDir property looked like it would work best for us, but I can't get that property to be recognized. I customized our workflow to pass it in to MSBuild as a command line argument (/p:CustomizableOutDir=true), but it seems MSBuild just ignores it and puts the output into the OutDir given by the workflow. I looked at the build logs, and I can see that the CustomizableOutDir and OutDir properties are both getting set in the command line args to MSBuild. I still need OutDir to be passed in so that I can copy my files to TeamBuildOutDir at the end. Any idea why my CustomizableOutDir parameter isn't getting recognized, or if there's a better way to achieve this?

  • How do I get autotest (ZenTest) to see my namespaced stuff?

    - by Blaine LaFreniere
    Autotest is supposed to map my tests to a class, I believe. When I have class Foo and class FooTest, autotest should see FooTest and say, "Hey, this test corresponds to the unit Foo, so I'll look for changes there and re-run tests when changes occur." And that works; however, when I have Foo::Bar and Foo::BarTest, autotest doesn't seem to make the connection, and whenever I edit Foo::Bar, autotest does not re-run Foo::BarTest. Am I doing something wrong?

    EDIT: The file structure might be helpful. Here it is:

        Module and class files:
        lib/foo.rb
        lib/foo/bar.rb
        lib/foo/baz.rb

        Test files:
        test/unit/foo/bar.rb
        test/unit/baz.rb

    I would think that autotest is able to make the connection between Foo::Bar and Foo::BarTest, but apparently it doesn't.
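
    If the default mappings don't cover a layout, autotest can be taught new ones in a ./.autotest file (a sketch -- the mapping below assumes test files are named after the class they cover; adjust to the real naming scheme):

        # ./.autotest
        Autotest.add_hook :initialize do |at|
          # When lib/foo/bar.rb changes, re-run any test file mentioning 'bar'
          at.add_mapping(%r%^lib/(?:.*/)?(\w+)\.rb$%) do |_, m|
            at.files_matching %r%^test/.*#{m[1]}.*\.rb$%
          end
        end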

  • Attempting to extract a pattern within a string

    - by Brian
    I'm attempting to extract a given pattern within a text file; however, the results are not 100% what I want. Here's my code:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ParseText1 {
            public static void main(String[] args) {
                String content = "<p>Yada yada yada <code> foo ddd</code>yada yada ...\n"
                        + "more here <2004-08-24> bar<Bob Joe> etc etc\n"
                        + "more here again <2004-09-24> bar<Bob Joe> <Fred Kej> etc etc\n"
                        + "more here again <2004-08-24> bar<Bob Joe><Fred Kej> etc etc\n"
                        + "and still more <2004-08-21><2004-08-21> baz <John Doe> and now <code>the end</code> </p>\n";

                Pattern p = Pattern.compile(
                        "<[1234567890]{4}-[1234567890]{2}-[1234567890]{2}>.*?<[^%0-9/]*>",
                        Pattern.MULTILINE);
                Matcher m = p.matcher(content);

                // print all the matches that we find
                while (m.find()) {
                    System.out.println(m.group());
                }
            }
        }

    The output I'm getting is:

        <2004-08-24> bar<Bob Joe>
        <2004-09-24> bar<Bob Joe> <Fred Kej>
        <2004-08-24> bar<Bob Joe><Fred Kej>
        <2004-08-21><2004-08-21> baz <John Doe> and now <code>

    The output I want is:

        <2004-08-24> bar<Bob Joe>
        <2004-09-24> bar<Bob Joe>
        <2004-08-24> bar<Bob Joe>
        <2004-08-21> baz <John Doe>

    In short, the sequence of "date", "text (or blank)", and "name" must be extracted. Everything else should be avoided. For example, the tag <Fred Kej> did not have any "date" tag before it; therefore, it should be flagged as invalid. Also, as a side question, is there a way to store or track the text snippets that were skipped/rejected, as well as the valid ones? Thanks, Brian
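
    A possible tightening (a sketch): disallow angle brackets in the middle part, so a match cannot leap across tags, and disallow digits inside the closing tag, so a second date cannot pose as a name. Tracking the rejected text falls out of remembering where the previous match ended.

        Pattern p = Pattern.compile("<\\d{4}-\\d{2}-\\d{2}>[^<>]*<[^0-9<>]*>");
        Matcher m = p.matcher(content);
        int lastEnd = 0;
        while (m.find()) {
            // everything between the previous match and this one was skipped
            String rejected = content.substring(lastEnd, m.start());
            lastEnd = m.end();
            System.out.println(m.group());
        }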

  • NHibernate: Subqueries.Exists not working

    - by cbp
    I am trying to get SQL like the following using NHibernate's criteria API:

        SELECT * FROM Foo
        WHERE EXISTS (SELECT 1 FROM Bar
                      WHERE Bar.FooId = Foo.Id
                      AND EXISTS (SELECT 1 FROM Baz WHERE Baz.BarId = Bar.Id))

    So basically, Foos have many Bars, and Bars have many Bazes. I want to get all Foos that have Bars with Bazes. To do this, a detached criteria seems best, like this:

        var subquery = DetachedCriteria.For<Bar>("bar")
            .SetProjection(Projections.Property("bar.Id"))
            .Add(Restrictions.Eq("bar.FooId", "foo.Id"))
            // I have also tried replacing "bar.FooId" with "bar.Foo.Id"
            .Add(Restrictions.IsNotEmpty("bar.Bazes"));

        return Session.CreateCriteria<Foo>("foo")
            .Add(Subqueries.Exists(subquery))
            .List<Foo>();

    However, this throws the exception:

        System.ArgumentException: Could not find a matching criteria info provider to: bar.FooId = foo.Id and bar.Bazes is not empty

    Is this a bug with NHibernate? Is there a better way to do this?
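
    One thing worth checking (a sketch of a likely fix, not verified against this mapping): Restrictions.Eq compares a property to a constant value, so "foo.Id" is being treated as a literal string rather than as the outer alias. Correlating the subquery with the outer criteria is what Restrictions.EqProperty is for:

        var subquery = DetachedCriteria.For<Bar>("bar")
            .SetProjection(Projections.Property("bar.Id"))
            // property-to-property comparison, correlating bar with the outer foo
            .Add(Restrictions.EqProperty("bar.Foo.Id", "foo.Id"))
            .Add(Restrictions.IsNotEmpty("bar.Bazes"));

        return Session.CreateCriteria<Foo>("foo")
            .Add(Subqueries.Exists(subquery))
            .List<Foo>();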

  • Python serialize lexical closures?

    - by dsimcha
    Is there a way to serialize a lexical closure in Python using the standard library? pickle and marshal appear not to work with lexical closures. I don't really care about the details of binary vs. string serialization, etc., it just has to work. For example:

        def foo(bar, baz):
            def closure(waldo):
                return baz * waldo
            return closure

    I'd like to just be able to dump instances of closure to a file and read them back.

    Edit: One relatively obvious way that this could be solved is with some reflection hacks to convert lexical closures into class objects and vice versa. One could then convert to classes, serialize, unserialize, and convert back to closures. Heck, given that Python is duck typed, if you overloaded the function call operator of the class to make it look like a function, you wouldn't even really need to convert it back to a closure, and the code using it wouldn't know the difference. If any Python reflection API gurus are out there, please speak up.
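
    A sketch of the class-based workaround described in the edit, minus the reflection (the captured variable is simply stored explicitly):

        import pickle

        class Closure:
            """A picklable stand-in for foo's inner closure."""
            def __init__(self, baz):
                self.baz = baz              # the free variable, made explicit

            def __call__(self, waldo):      # duck-types as a function
                return self.baz * waldo

        def foo(bar, baz):
            return Closure(baz)

        c = foo("unused", 3)
        restored = pickle.loads(pickle.dumps(c))  # round-trips fine
        assert restored(2) == 6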

  • iPhone app running in Instruments fails with unrecognized selector

    - by Mark Smith
    I have an app that appears to run without problems in normal use. The Clang Static Analyzer reports no problems either. When I try to run it in Instruments, it fails with an unrecognized selector exception. The offending line is a simple property setter of the form:

        self.bar = baz;

    To figure out what's going on, I added an NSLog() call immediately above it:

        NSLog(@"class = %@ responds = %d", [self class],
              [self respondsToSelector:@selector(setBar:)]);
        self.bar = baz;

    On the simulator (without Instruments) and on a device, this shows exactly what I'd expect:

        class = Foo responds = 1

    When running under Instruments, I get:

        class = Foo responds = 0

    I'm stumped as to what could cause this. Perhaps a different memory location is getting tromped on when it's in the Instruments environment? Can anyone suggest how I might debug this?
