Search Results

Search found 11944 results on 478 pages for 'struts2 json plugin'.

Page 439/478

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of big files (up to 60 TB in total, usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file once, feeding a checksumming tool on one side and building a tar to tape on the other, something along the lines of: tar cf - files | tee tarfile.tar | md5sum - Except that I don't want the checksum of the whole archive (which is all this sample shell code gives) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to get what I need; Perl/Python/etc simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of an existing solution to this before I start code-churning?
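
    One possible shape for the hand-built tool, sketched in Python purely as a proof of concept (the asker's doubts about scripting-language throughput still apply): wrap each file in a reader that updates an MD5 digest as the tarfile module streams it, so every byte is read exactly once on its way to tape.

        import hashlib, sys, tarfile

        class HashingReader:
            """File-like wrapper: hashes the data as tar reads it."""
            def __init__(self, path):
                self._f = open(path, 'rb')
                self.md5 = hashlib.md5()
            def read(self, size=-1):
                chunk = self._f.read(size)
                self.md5.update(chunk)
                return chunk
            def close(self):
                self._f.close()

        def tar_with_checksums(paths, tar_stream, sums_path='checksums.md5'):
            with tarfile.open(fileobj=tar_stream, mode='w|') as tar, open(sums_path, 'w') as sums:
                for path in paths:
                    reader = HashingReader(path)
                    tar.addfile(tar.gettarinfo(path), reader)   # single pass: data to the tar stream, hash updated on the fly
                    reader.close()
                    sums.write('%s  %s\n' % (reader.md5.hexdigest(), path))

        if __name__ == '__main__':
            tar_with_checksums(sys.argv[1:], sys.stdout.buffer)   # e.g. pipe stdout to the tape device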

    Read the article

  • Trouble using the term_extraction gem

    - by mathee
    I'm trying to install this gem: http://github.com/alexrabarts/term_extraction. It required nokogiri, which I tried installing. I'm getting this as the output: >gem install nokogiri Successfully installed nokogiri-1.4.2.1-x86-mswin32 1 gem installed Installing ri documentation for nokogiri-1.4.2.1-x86-mswin32... No definition for parse_memory No definition for parse_file No definition for parse_with No definition for get_options No definition for set_options Installing RDoc documentation for nokogiri-1.4.2.1-x86-mswin32... No definition for parse_memory No definition for parse_file No definition for parse_with No definition for get_options No definition for set_options I was able to install the termextraction gem (per the README on git repo): >gem install alexrabarts-term_extraction -s http://gems.github.com Successfully installed alexrabarts-term_extraction-0.1.4 1 gem installed Installing ri documentation for alexrabarts-term_extraction-0.1.4... Installing RDoc documentation for alexrabarts-term_extraction-0.1.4... The issue is that I'm trying to test it out, but I'm getting an "uninitialized constant" error when I use it: ActionView::TemplateError (uninitialized constant ApplicationHelper::TermExtraction) on line #3 of app/views/questions/new. haml: 1: %h1 New question 2: -msg = "testing this context thing let's see what it gives me" 3: -getTerms(msg) 4: #new-question-form 5: .box-background 6: -form_for(@question) do |f| app/helpers/application_helper.rb:42:in `getTerms' app/views/questions/new.haml:3:in `_run_haml_app47views47questions47new46haml' haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render' haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render' app/controllers/questions_controller.rb:91:in `new' haml (2.2.23) lib/sass/plugin/rails.rb:20:in `process' Here is application_helper.rb: module ApplicationHelper def getTerms(context) yahoo = TermExtraction::Yahoo.new(:api_key => 'myAPIkey', :context => context) end end I'm not sure what the issue is. Any insight would be greatly appreciated!
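
    For what it's worth, this usually comes down to the gem never being required: installing it does not load it into the Rails process. A minimal sketch for a Rails 2.x config/environment.rb follows; the :lib name is assumed from the repository name, so treat it as a guess.

        Rails::Initializer.run do |config|
          config.gem 'alexrabarts-term_extraction',
                     :lib    => 'term_extraction',          # assumed require name
                     :source => 'http://gems.github.com'
        end

        # or, more bluntly, at the top of application_helper.rb:
        # require 'term_extraction'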

    Read the article

  • jQuery & Prototype Conflict

    - by DPereyra
    Hi, I am using the jQuery AutoComplete plugin in an html page where I also have an accordion menu which uses prototype. They both work perfectly separately but when I tried to implement both components in a single page I get an error that I have not been able to understand. uncaught exception: [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsIDOMViewCSS.getComputedStyle]" nsresult: "0x80004005 (NS_ERROR_FAILURE)" location: "JS frame :: file:///C:/Documents and Settings/Administrator/Desktop/website/js/jquery-1.2.6.pack.js :: anonymous :: line 11" data: no] I found out the file conflicting with jQuery is 'effects.js' which is used by the accordion menu. I tried replacing this file with a newer version but newer seems to break the accordion behavior. My guess is that the 'effects.js' file used in the accordion was modified to obtain the accordion demo output. I also tried using the overriding methods jQuery needs to avoid conflict with other libraries and that did not work. I obtained the accordion demo from the following site: http://www.stickmanlabs.com/accordion/ And the jQuery AutoComplete can be obtained from: http://docs.jquery.com/Plugins/Autocomplete#Setup Has any one else experienced this issue? Thanks.
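
    Before digging into effects.js it may be worth ruling out the usual $ collision. A minimal sketch of the standard escape hatch (the element id and data variable are hypothetical, and whether it also cures the getComputedStyle exception is not guaranteed): load prototype.js and effects.js first, load jQuery after them, then hand $ back to Prototype.

        var $j = jQuery.noConflict();           // from here on, Prototype owns $

        $j(function () {
          $j('#search').autocomplete(data);     // jQuery code keeps working through $j
        });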

    Read the article

  • How do I use HTML5's localStorage in a Google Chrome extension?

    - by davidkennedy85
    I am trying to develop an extension that will work with Awesome New Tab Page. I've followed the author's advice to the letter, but it doesn't seem like any of the script I add to my background page is being executed at all. Here's my background page: <script> var info = { poke: 1, width: 1, height: 1, path: "widget.html" } chrome.extension.onRequestExternal.addListener(function(request, sender, sendResponse) { if (request === "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-poke") { chrome.extension.sendRequest( sender.id, { head: "mgmiemnjjchgkmgbeljfocdjjnpjnmcg-pokeback", body: info, } ); } }); function initSelectedTab() { localStorage.setItem("selectedTab", "Something"); } initSelectedTab(); </script> Here is manifest.json: { "update_url": "http://clients2.google.com/service/update2/crx", "background_page": "background.html", "name": "Test Widget", "description": "Test widget for mgmiemnjjchgkmgbeljfocdjjnpjnmcg.", "icons": { "128": "icon.png" }, "version": "0.0.1" } Here is the relevant part of widget.html: <script> var selectedTab = localStorage.getItem("selectedTab"); document.write(selectedTab); </script> Every time, the browser just displays null. The local storage isn't being set at all, which makes me think the background page is completely disconnected. Do I have something wired up incorrectly?
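
    One way to narrow this down (a diagnostic sketch, not a fix for the ANTP integration itself): have widget.html ask the background page for the value over the extension messaging API instead of touching localStorage directly. If the value comes back, the storage write works and the question becomes which context widget.html is actually running in.

        // background.html
        chrome.extension.onRequest.addListener(function (request, sender, sendResponse) {
          if (request.get === "selectedTab") {
            sendResponse({ value: localStorage.getItem("selectedTab") });
          }
        });

        // widget.html
        chrome.extension.sendRequest({ get: "selectedTab" }, function (response) {
          document.write(response.value);
        });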

    Read the article

  • How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

    - by Henry
    About 6 months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript. With TextMate, whenever I needed to run a spec or a test, I could just command+R on the test or spec file and another window would open and the results would be displayed with the 'pretty' format applied. If the spec or test was a lengthy one, I could just continue working with the codebase since the test/spec was running in a separate process/window. After the test ran, I could click through the results directly to the corresponding line in the spec file. Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment. Running :Rake when the current buffer is a test or spec runs the file then splits the buffer to display the results. You can navigate through the results and key through to the corresponding spot in the file. The problem with the rails.vim approach is that it locks up the MacVim window while the test runs. This can be an issue with big apps that might have a lot of setup/teardown built into the tests. Also, the visual red/green html results that TextMate displays (via --format pretty, I'm assuming) is a bit easier to scan than the split window. This guy came close about 18 mos ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/ The script he has worked with a bit of hacking, but the tests still ran within MacVim and locked up the current window. Any ideas on how to fully replicate the TextMate behavior described above in MacVim? Thanks!
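
    A rough sketch of the non-blocking half, assuming RSpec 2's rspec binary (use spec for RSpec 1.x) and OS X's open command; the function name and mapping are made up. The shell command is backgrounded with &, so MacVim stays responsive, and the HTML-formatted results open in the browser when the run finishes. Click-through from the results back to the source line is the part this does not give you.

        " in ~/.vimrc (or a Ruby filetype plugin)
        function! s:RunSpecInBackground()
          let l:out = tempname() . '.html'
          execute 'silent !(rspec --format html --out ' . l:out . ' '
                \ . shellescape(expand('%:p')) . ' && open ' . l:out . ') > /dev/null 2>&1 &'
          redraw!
        endfunction
        nnoremap <Leader>r :call <SID>RunSpecInBackground()<CR>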

    Read the article

  • Rails runner command not saving to cache

    - by mark
    Hi, I'm having a bit of a problem with a cron task generated by the Rails whenever plugin that should store remote data in the Rails cache for display. What I have is this in schedule.rb: set :path, '/var/www/apps/tuexplore/current' every 1.hour do runner "Weather.cache_remote", :environment => :production end which calls this model: class Weather def self.cache_remote Rails.cache.write('weather_data', Net::HTTP.get_response(URI.parse(WEATHER_URL)).body) end end Calling whenever returns this: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deploy/.gem/ruby/1.8/bin 0 * * * * /var/www/apps/tuexplore/current/script/runner -e production "Weather.cache_remote" This doesn't work. Calling the Weather model method from a controller works fine, but I need to schedule it hourly. The cron task causes a "Cache write: weather_data" entry to appear in the production log, but the data isn't stored or output into the page. Additional information: I can log into the production console, run Weather.cache_remote, and then read the data from the Rails cache. I'd really appreciate it if someone could point out the error of my ways. If further explanation is needed please ask. Thanks in advance for any pointers.
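
    A likely culprit, offered as a guess: Rails 2.x defaults to an in-process memory store, so a cache write made inside the script/runner process is never visible to the app server processes, which is consistent with the write showing up in the log while the page sees nothing. Pointing both at a shared backend would look roughly like this in config/environments/production.rb:

        # file store shared by every process on the machine
        config.cache_store = :file_store, "#{RAILS_ROOT}/tmp/cache"

        # or memcached, if it is available
        # config.cache_store = :mem_cache_store, 'localhost:11211'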

    Read the article

  • Using maven to distribute a swing application that can have each dependency individually tracked

    - by tms
    I'm moving my project to Maven and eventually OSGi. I currently distribute the project as a large Zip file with all the dependencies. Although my project's code is only 20% of the total package, I have to redistribute all the dependencies, and with smaller independent modules my share may be even less. Looking here on Stack Overflow it seems that, to keep my current plan, the maven-assembly-plugin should do the trick. I was considering having a base installer that would look at an XML manifest, then collect all the libraries that need to be updated. This would mean that libraries that change only occasionally would be downloaded less often. This also makes sense for something like OSGi plugins (which could have independent release schedules). In essence I want my software to track and manage individual libraries, and download them on demand (based on the manifest). I was wondering if there is a "Maven way" of generating this manifest and publishing all the libraries to a website? I believe the deploy life-cycle would do the second step. As an alternative, is there an open-source Java library that does this type of deployment? I don't want to embed Maven or something larger with the distributed code. The application is not for coders; the simpler the better, and the smaller the installer the better.
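
    There is no single switch for this, but a hedged sketch of the collection step: maven-dependency-plugin's copy-dependencies goal gathers every runtime dependency into one folder at package time, and a manifest of artifact/version pairs can then be derived from that folder (or from mvn dependency:list) by a small script or an assembly descriptor. The output path below is arbitrary.

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-dependency-plugin</artifactId>
          <executions>
            <execution>
              <id>copy-libs</id>
              <phase>package</phase>
              <goals><goal>copy-dependencies</goal></goals>
              <configuration>
                <outputDirectory>${project.build.directory}/libs</outputDirectory>
              </configuration>
            </execution>
          </executions>
        </plugin>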

    Read the article

  • passing data from a servlet to javascript code in an Ajax application ?

    - by A.S al-shammari
    I have a simple JSP/servlet application and I want to add an AJAX feature to it. I use jQuery, but it doesn't matter which JavaScript framework I use. This is my code: <script type="text/javascript"> function callbackFunction(data){ $('#content').html(data); } $('document').ready(function(){ $('#x').click(function() { $.post('/ajax_2/servlet',callbackFunction) }); }); </script> <body> <a href="#" id="x">Increase it</a> <div id="content"></div> </body> </html> Servlet: HttpSession session = request.getSession(); Integer myInteger = (Integer)session.getAttribute("myInteger"); if(myInteger == null) myInteger = new Integer(0); else myInteger = new Integer(myInteger+1); session.setAttribute("myInteger", myInteger); response.getWriter().println(myInteger); The question: I use out.print to transfer data from the servlet to the JavaScript (AJAX) code, but if I have a complex structure such as a Vector of objects or something like that, what is the best way to transfer it? What about XML, or JSON? Is there any special JSP/servlet library for transferring data from a servlet to an AJAX application? And how can I parse this data in callbackFunction?
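
    JSON is the usual answer here, and any of the common libraries will do; the fragment below uses Gson purely as an example (MyObject and buildResults are placeholders). On the client, calling $.post('/ajax_2/servlet', {}, callbackFunction, 'json') makes jQuery parse the response, so callbackFunction receives a ready-made JavaScript array instead of raw text.

        // in the servlet
        import com.google.gson.Gson;

        List<MyObject> results = buildResults();              // whatever complex structure you have
        response.setContentType("application/json");
        response.setCharacterEncoding("UTF-8");
        response.getWriter().write(new Gson().toJson(results));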

    Read the article

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. for running tests against specific XML files), I just get System.IO.DirectoryNotFoundExceptions. The reason for this is clear: it's looking for those supporting XML files loaded by various unit tests in the wrong folder. The way my unit tests are structured looks like this: +-- project folder +-- unit tests folder +-- test.xml +-- test.cs +-- project file.xaml +-- project file.xaml.cs All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML Schemas, etc that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to just put the testing XML data in code, rather than loading it from a file. I would rather not do this.
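
    One common way out, sketched under the assumption that the XML files can be marked as test content: set each supporting file's "Copy to Output Directory" property (or include it as a deployment item), then resolve it relative to the test assembly rather than the current working directory, which is exactly what differs between the NUnit GUI and the TeamCity runner.

        // using System.IO; using System.Reflection;
        string baseDir = Path.GetDirectoryName(
            new Uri(Assembly.GetExecutingAssembly().CodeBase).LocalPath);
        string xmlPath = Path.Combine(baseDir, "test.xml");   // no dependence on the current directory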

    Read the article

  • d3: Coloring Multiple Lines from Nested Data

    - by diet_coke
    I'm currently assembling some line graphs with circles at the datapoints from arrays of JSON objects formatted like so: var data = [{ "name": "metric1", "datapoints": [ [10.0, 1333519140], [48.0, 1333519200] ] }, { "name": "metric2", "datapoints": [ [48.0, 1333519200], [12.0, 1333519260] ] }] I want to have a color for each metric, so I'm trying to color them based on the index of the object within the array data. The code I have currently for just placing the circles looks like: // We bind an svg group to each metric. var metric_groups = this.vis.selectAll("g.metric_group") .data(data).enter() .append("g") .attr("class", "metric_group"); // Then bind a circle for each datapoint. var circles = metric_groups.selectAll("circle") .data(function(d) { return d.datapoints; }); circles.enter().append("circle") .attr("r", 3.5); Now if I change that last bit to something like: circles.enter().append("circle") .attr("r", 3.5); .style("fill", function(d,i) { return i%2 ? "red" : "blue"; } I get alternating red and blue circles, as could be expected. Taking some advice from Nested Selections : 'Nesting and Index', I tried: circles.enter().append("circle") .attr("r", 3.5); .style("fill", function(d,i,j) { return j%2 ? "red" : "blue"; } Which doesn't work (j is undefined), presumably because we are in the named property datapoints, rather than an array element. How might I go about doing the coloring that I want without changing my data structure? Thanks!
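
    One way that avoids the version-dependent third argument, sketched against d3 v2/v3: reach up to the parent group's bound datum (the metric object) and key the colour on it.

        var color = d3.scale.category10();

        circles.enter().append("circle")
            .attr("r", 3.5)
            .style("fill", function (d) {
                var metric = d3.select(this.parentNode).datum();   // the {name, datapoints} object
                return color(metric.name);
            });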

    Read the article

  • Update graph in real time from server

    - by user1869421
    I'm trying to update a graph with received data, so that the height of the bars increase as more data is received from the server via a websocket. But my code doesn't render a graph in the browser and plot the data points. I cannot see anything wrong with the code. I really need some help here please. ws = new WebSocket("ws://localhost:8888/dh"); var useData = [] //var chart; var chart = d3.select("body") .append("svg:svg") .attr("class", "chart") .attr("width", 420) .attr("height", 200); ws.onmessage = function(evt) { var distances = JSON.parse(evt.data); data = distances.miles; console.log(data); if(useData.length <= 10){ useData.push(data) } else { var draw = function(data){ // Set the width relative to max data value var x = d3.scale.linear() .domain([0, d3.max(useData)]) .range([0, 420]); var y = d3.scale.ordinal() .domain(useData) .rangeBands([0, 120]); var rect = chart.selectAll("rect") .data(useData) // enter rect rect.enter().append("svg:rect") .attr("y", y) .attr("width", x) .attr("height", y.rangeBand()); // update rect rect .attr("y", y) .attr("width", x) .attr("height", y.rangeBand()); var text = chart.selectAll("text") .data(useData) // enter text text.enter().append("svg:text") .attr("x", x) .attr("y", function (d) { return y(d) + y.rangeBand() / 2; }) .attr("dx", -3) // padding-right .attr("dy", ".35em") // vertical-align: middle .attr("text-anchor", "end") // text-align: right .text(String); // update text text .data(useData) .attr("x", x) .text(String); } useData.length = 0; } } Thanks
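
    For what it's worth, nothing in the posted handler ever calls draw: it is only defined once the buffer has more than 10 entries, and useData is then cleared. A minimal restructuring sketch, with draw() standing for the bar-drawing code hoisted out of the handler:

        ws.onmessage = function (evt) {
            var miles = JSON.parse(evt.data).miles;
            useData.push(miles);
            if (useData.length > 10) {
                useData.shift();            // keep a sliding window of the last 10 readings
            }
            draw(useData);                  // redraw on every message
        };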

    Read the article

  • beforeClose not working in jGrowl?

    - by sparkymark75
    I have the following code which pulls json data from an ASP.NET page and displays these as notifications. The code will also take a note of what's been pulled through and store it in an array to prevent it being shown again in the same session. I'm now trying to implement functionality so that when the user closes a message, it's ID is recorded in a cookie to prevent it ever being shown again. To do this, I'm trying to write to the cookie when the beforeClose event fires. Everything else works fine apart from the saving to a cookie bit. Is there something wrong with my code that I'm missing? var alreadyGrowled = new Array(); var noteCookie = $.cookie("notificationsViewed"); if (noteCookie != null) { alreadyGrowled = noteCookie.split(","); } function growlCheckNew() { $.getJSON('getNotifications.aspx', function(data) { $(data).each(function(entryIndex, entry) { var newMessage = true; $(alreadyGrowled).each(function(index, msg_id) { if (entry['ID'] == msg_id) { newMessage = false; } }); if (newMessage == true) { $.jGrowl(entry['Message'], { sticky: true, header: entry['Title'], beforeClose: function(e, m) { $.cookie("notificationsViewed", entry['ID']); } }); } alreadyGrowled.push(entry['ID']); }); }); }
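
    One thing that stands out: $.cookie("notificationsViewed", entry['ID']) replaces the whole cookie with a single ID on every close. A sketch of the handler that appends instead, assuming the jquery.cookie plugin and that the jGrowl version in use does fire beforeClose:

        beforeClose: function (e, m) {
            if ($.inArray(entry['ID'], alreadyGrowled) === -1) {
                alreadyGrowled.push(entry['ID']);
            }
            $.cookie("notificationsViewed", alreadyGrowled.join(","),
                     { expires: 365, path: "/" });
        }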

    Read the article

  • How to close InAppBrowser itself in Phonegap Application?

    - by Shashi
    I am developing a PhoneGap application and I am currently using the InAppBrowser to display external pages. On some of the external pages I place a close button, and I want that button to close the InAppBrowser itself. Because the InAppBrowser is what displays these pages, the reference to it cannot be reached from inside them to close it. Please do not suggest the ChildBrowser plugin. window.close(); // did not work for me or iabRef.close(); // also did not work for me, because iabRef is not accessible inside the InAppBrowser; it is created in the parent window Some Android and iOS devices display a Done button to close it, as does the iPad, but on Android tablets there is no button of any kind to close it. UPDATE: here is my full code: var iabRef = null; function iabLoadStart(event) { } function iabLoadStop(event) { } function iabClose(event) { iabRef.removeEventListener('loadstart', iabLoadStart); iabRef.removeEventListener('loadstop', iabLoadStop); iabRef.removeEventListener('exit', iabClose); } function startInAppB() { var myURL=encodeURI('http://www.domain.com/some_path/mypage.html'); iabRef = window.open(myURL,'_blank', 'location=yes'); iabRef.addEventListener('loadstart', iabLoadStart); iabRef.addEventListener('loadstop', iabLoadStop); iabRef.addEventListener('exit', iabClose); }
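
    A common workaround, sketched with a made-up marker path: the page inside the InAppBrowser cannot reach iabRef, but the parent sees every navigation, so the close button can simply navigate to a recognisable URL and the loadstart handler closes the browser.

        function iabLoadStart(event) {
            if (event.url.indexOf('close_iab') !== -1) {   // marker chosen by the close link below
                iabRef.close();
            }
        }

        // on the external page:
        // <a href="http://www.domain.com/close_iab">Close</a>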

    Read the article

  • Bizarre Bug with our Rails app in IE

    - by Callmeed
    We're experiencing a really bizarre bug in our Rails 2.3.4 app. This bug only happens in Internet Explorer (7 and 8). Here's what happens: A new customer creates an account at https://domain.com/signup/free (notice no subdomain) Their account is identified by a subdomain like "example.domain.com" After signing up, they get a welcome screen with a link to their account's home page They follow the link, then click the "log in" button and attempt to login Even though they provide valid credentials, the app redirects back to their account's root url ... they can never reach their admin area The only way they can login (on IE) is by quitting and re-opening IE ... then it works fine ... Something with their initial session is preventing them from logging in. If it matters, we are using restful_authentication and the ssl_requirement plugin ... I'm not sure if one or both of those has a problem with IE but we are stumped here. Also, I've read IE has an issue with subdomains that contain underscores ... this isn't what's going on.

    Read the article

  • 503 server response for Googlebot

    - by Hallik
    I put an .htaccess file in my webroot with the following contents RewriteBase / RewriteCond %{HTTP_USER_AGENT} ^.*(Googlebot|Googlebot|Mediapartners|Adsbot|Feedfetcher)-?(Google|Image)? [NC] RewriteRule .* /var/www/503.html This website is in maintenance mode, and I don't want anything indexed yet. I tested the code with a firefox User-Agent switcher plugin, and looking at the access log it shows this at the end of each log entry, but watching in TamperData or Firebug, it still returns a 200 server response instead of a 503. What am I doing wrong? "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" contents of /var/www/503.html <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2//EN"> <html> <head> <title>503 - Service temporary unavailable</title> </head> <body> <h1>503 - Service temporary unavailable</h1> <p>Sorry, this website is currently down for maintainance please retry later</p> </body> </html> I get this in my error log. LogLevel debug, would that go into the vhost in a specific place? Every answer I see on google is something different. Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace.
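
    Two separate things look off in the posted rules: the substitution is a filesystem path (which explains the internal-redirect loop), and a plain rewrite can only ever answer 200. A sketch of the usual maintenance-mode setup, assuming the error page sits in the web root and Apache 2.4 (on 2.2 the R=503 flag is not available, so the common workaround is rewriting to a small script that sends the status itself):

        ErrorDocument 503 /503.html
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_USER_AGENT} (Googlebot|Mediapartners|Adsbot|Feedfetcher)-?(Google|Image)? [NC]
        RewriteCond %{REQUEST_URI} !^/503\.html$
        RewriteRule .* - [R=503,L]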

    Read the article

  • Zend_Auth and database session SaveHandler

    - by takeshin
    I have created a Zend_Auth adapter implementing Zend_Auth_Adapter_Interface (similar to Pádraic's adapter) and a simple ACL plugin. Everything works fine with the default session handler. So far, so good. As a next step I created a custom session SaveHandler to persist session data in the database. My implementation is very similar to the one from parables-demo, and it seems to be working: session data is properly saved to the database and session objects are serialized, but authentication stops working when I enable this custom SaveHandler. I have debugged the authentication and everything works up until the next request, when the authentication data is lost. I suspected it had something to do with the fact that I use $adapter->write($object) instead of $adapter->write($string), but the same happens with strings. I'm bootstrapping Zend_Application_Resource_Session in the first Bootstrap method, as early as possible. Does Zend_Auth need any extra configuration to persist data in the database? Why is the identity being lost?

    Read the article

  • Programming a jQuery slider

    - by Mirage
    I want to program a jQuery slider myself rather than using any plugin, but I want to know the basic idea. E.g. I have <ul> <li> <div>content </div> </li> <li> <div>content </div> </li> <li> <div>content </div> </li> <li> <div>content </div> </li> <li> <div>content </div> </li> </ul> I want to show only three items at a time, horizontally, with arrows at the left and right ends. I know the jQuery basics, but I don't know how to do this in steps. I mean: when the right arrow is clicked, the leftmost div should slide out to the left and the next div from the right should slide in. Any ideas on the sequence of what I need to do?
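
    The usual structure, sketched with made-up ids and a fixed item width: float the li elements, wrap the ul in a div with overflow: hidden sized to three items, and animate the ul's left margin by one item width per click (clamping at the first and last items is left out of the sketch).

        var itemWidth = 150;                         // assumed width of one <li>, in px

        $('#arrow-right').click(function () {
            $('ul.slider').animate({ marginLeft: '-=' + itemWidth }, 300);
        });
        $('#arrow-left').click(function () {
            $('ul.slider').animate({ marginLeft: '+=' + itemWidth }, 300);
        });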

    Read the article

  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a silverlight 2 beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttp binding. The webservice will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth usage standpoint as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a txt file and zipped it.). The request does have the "Accept-Encoding: gzip, deflate" - Is there any way have the WCF service gzip (or otherwise compress) the response? I did find this link but it sure seems a bit complex for functionality that should be handled out-of-the-box IMHO. OK - at first I marked the solution using the System.IO.Compression as the answer as I could never "seem" to get the IIS7 dynamic compression to work. Well, as it turns out: Dynamic Compression on IIS7 was working al along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since SL hands the web service call off to the browser, that the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler which monitors traffic external to the browser application. In fiddler, the response was, in fact, gzip compressed!! The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable Dynamic Compression in IIS7 and write no code at all.
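
    For anyone landing here for the configuration half of that update: with the Dynamic Content Compression module installed in IIS7, enabling it per site is roughly this in web.config (a sketch, not the full compression tuning):

        <system.webServer>
          <urlCompression doStaticCompression="true" doDynamicCompression="true" />
        </system.webServer>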

    Read the article

  • JQuery drag, drop and save via cookie - how to?

    - by RussP
    Sorry to be back folks, but you guys & girls seem to know much more about this than I do ... anyhow, here is my question/problem. I want to use drag, drop and sort (the Interface plugin suits me, even though I've read it's out of date; I have looked at jQuery UI, but to be honest it isn't clear to me and appears heavier than Interface). So, how do I set a cookie to save positions from this: $(document).ready( function () { $('a.closeEl').bind('click', toggleContent); $('div.groupWrapper').Sortable( { accept: 'groupItem', helperclass: 'sortHelper', activeclass : 'sortableactive', hoverclass : 'sortablehover', handle: 'div.itemHeader', tolerance: 'pointer', onChange : function(ser) { }, onStart : function() { $.iAutoscroller.start(this, document.getElementsByTagName('body')); }, onStop : function() { $.iAutoscroller.stop(); } } ); } ); var toggleContent = function(e) { var targetContent = $('div.itemContent', this.parentNode.parentNode); if (targetContent.css('display') == 'none') { targetContent.slideDown(300); $(this).html('[-]'); } else { targetContent.slideUp(300); $(this).html('[+]'); } return false; }; var ser = function (s) { serial = $.SortSerialize(s); alert(serial.hash); }; which is the "standard" Interface demo. PLUS: how do I then read that cookie so that when I next visit the page the order is as I set it in the cookie? Hopefully from that I can work out the rest. Thanks in advance for the help.
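
    A sketch of the saving half, assuming the jquery.cookie plugin is loaded and that the sortable container's id is groupWrapper (adjust to the real markup): persist the serialized order whenever a drag finishes. Restoring is then a matter of reading the cookie on load, splitting the hash, and re-appending the items in that order before Sortable is initialised.

        onStop : function() {
            $.iAutoscroller.stop();
            var serial = $.SortSerialize('groupWrapper');          // id assumed
            $.cookie('groupOrder', serial.hash, { expires: 30, path: '/' });
        },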

    Read the article

  • mysql: storing arbitrary data

    - by Hailwood
    Background: I was asking a question on stack overflow regarding creating tables on the fly where this conversation ensued: This smells like a terrible idea! In fact, it smells just like this one. What in the world do you want to use this for? – deceze @deceze: very true, However, How else would you store the contents of these CSV files. They must be stored in mysql for indexing. The only solid fact about them is that they all have a mobile column with a standard format. The CSV can have an arbitrary amount of columns with an arbitrary amount of rows. They can (with no exaggeration) range from a single row, 35 column csv to an 80k row single column CSV. I am open to other ideas. – Hailwood There are many solutions for this, from attribute-value schemas to JSON storage and NoSQL storage. Open a new question about it. Whatever you do though, don't dynamically create tables! – deceze Question: So my question is, What would you say is the best way to store this data? Are you in agreement with deceze about not creating dynamic tables?
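
    For reference, the attribute-value layout deceze mentions looks roughly like this in MySQL (table and column names invented): one narrow row per CSV cell, with the one guaranteed column, mobile, kept as a real indexed field.

        CREATE TABLE csv_row (
            id     INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            mobile VARCHAR(20) NOT NULL,
            KEY idx_mobile (mobile)
        );

        CREATE TABLE csv_cell (
            row_id INT UNSIGNED NOT NULL,
            name   VARCHAR(64)  NOT NULL,     -- the CSV column header
            value  TEXT,
            PRIMARY KEY (row_id, name),
            KEY idx_name (name)
        );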

    Read the article

  • How to handle duplicate values in d3.js

    - by Mario
    First, I'm a d3.js noob :) As you can see from the title, I've got a problem with duplicated data, and aggregating the values is not an option, because the names represent different bus stops. In this example the stops might be on the front and back sides of a building. And of course I'd like to show the names on the x-axis. I created an example and the result is a bloody mess, see the jsFiddle. x = index, name = bus stop name, n = value. I've got JSON like this: [{ "x": 0, "name": "Corniche St / Abu Dhabi Police GHQ", "n": 113 }, { "x": 1, "name": "Corniche St / Nation Towers", "n": 116 }, { "x": 2, "name": "Zayed 1st St / Al Khalidiya Public Garden", "n": 146 }, ... { "x": 49, "name": "Hamdan St / Tariq Bin Zeyad Mosque", "n": 55 }] The problem: the same name can appear more than once, e.g. { "x": 1, "name": "Corniche St / Nation Towers", "n": 116 } and { "x": 4, "name": "Corniche St / Nation Towers", "n": 105 }. I'd like to know whether there is a way to tell d3.js not to use distinct names and instead just show all names in sequence with their values. Any ideas or suggestions are very welcome :) If you need more information let me know. Thanks in advance, Mario
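
    One approach, sketched against d3 v3 with a hypothetical width variable: key the ordinal scale on the row index (which is always unique) and only bring the stop name in when labelling the axis, so duplicate names no longer collapse into one band.

        var x = d3.scale.ordinal()
            .domain(data.map(function (d) { return d.x; }))          // index, never the name
            .rangeRoundBands([0, width], 0.1);

        var xAxis = d3.svg.axis()
            .scale(x)
            .orient("bottom")
            .tickFormat(function (i) { return data[i].name; });      // duplicate labels are fine here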

    Read the article

  • Using HABTM relationships in cakephp plugins with unique set to false

    - by Dean
    I am working on a plugin for our CakePHP CMS that will handle blogs. When getting to the tags I needed to set the HABTM relationship to unique = false to be able add tags to a post without having to reset them all. The BlogPost model looks like this class BlogPost extends AppModel { var $name = 'BlogPost'; var $actsAs = array('Core.WhoDidIt', 'Containable'); var $hasMany = array('Blog.BlogPostComment'); var $hasAndBelongsToMany = array('Blog.BlogTag' => array('unique' => false), 'Blog.BlogCategory'); } The BlogTag model looks like this class BlogTag extends AppModel { var $name = 'BlogTag'; var $actsAs = array('Containable'); var $hasAndBelongsToMany = array('Blog.BlogPost'); } The SQL error I am getting when I have the unique = true setting in the HABTM relationship between the BlogPost and BlogTag is Query: SELECT `Blog`.`BlogTag`.`id`, `Blog`.`BlogTag`.`name`, `Blog`.`BlogTag`.`slug`, `Blog`.`BlogTag`.`created_by`, `Blog`.`BlogTag`.`modified_by`, `Blog`.`BlogTag`.`created`, `Blog`.`BlogTag`.`modified`, `BlogPostsBlogTag`.`blog_post_id`, `BlogPostsBlogTag`.`blog_tag_id` FROM `blog_tags` AS `Blog`.`BlogTag` JOIN `blog_posts_blog_tags` AS `BlogPostsBlogTag` ON (`BlogPostsBlogTag`.`blog_post_id` = 4 AND `BlogPostsBlogTag`.`blog_tag_id` = `Blog`.`BlogTag`.`id`) As you can see it is trying to set the blog_tags table to 'Blog'.'BlogTag. which isn't a valid MySQL name. When I remove the unique = true from the relationship it all works find and I can save one tag but when adding another it just erases the first one and puts the new one in its place. Does anyone have any ideas? is it a bug or am I just missing something? Cheers, Dean
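
    A guess worth trying, based on the plugin model conventions: keep the association alias plain and move the plugin-prefixed class name into className, so the Blog. prefix never leaks into the SQL alias.

        var $hasAndBelongsToMany = array(
            'BlogTag' => array(
                'className' => 'Blog.BlogTag',
                'unique'    => false,
            ),
            'BlogCategory' => array('className' => 'Blog.BlogCategory'),
        );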

    Read the article

  • Can't iterate over a list class in Python

    - by Vicky
    I'm trying to write a simple GUI front end for Plurk using pyplurk. I have successfully got it to create the API connection, log in, and retrieve and display a list of friends. Now I'm trying to retrieve and display a list of Plurks. pyplurk provides a GetNewPlurks function as follows: def GetNewPlurks(self, since): '''Get new plurks since the specified time. Args: since: [datetime.datetime] the timestamp criterion. Returns: A PlurkPostList object or None. ''' offset = jsonizer.conv_datetime(since) status_code, result = self._CallAPI('/Polling/getPlurks', offset=offset) return None if status_code != 200 else \ PlurkPostList(result['plurks'], result['plurk_users'].values()) As you can see this returns a PlurkPostList, which in turn is defined as follows: class PlurkPostList: '''A list of plurks and the set of users that posted them.''' def __init__(self, plurk_json_list, user_json_list=[]): self._plurks = [PlurkPost(p) for p in plurk_json_list] self._users = [PlurkUser(u) for u in user_json_list] def __iter__(self): return self._plurks def GetUsers(self): return self._users def __eq__(self, other): if other.__class__ != PlurkPostList: return False if self._plurks != other._plurks: return False if self._users != other._users: return False return True Now I expected to be able to do something like this: api = plurk_api_urllib2.PlurkAPI(open('api.key').read().strip(), debug_level=1) plurkproxy = PlurkProxy(api, json.loads) user = plurkproxy.Login('my_user', 'my_pass') ps = plurkproxy.GetNewPlurks(datetime.datetime(2009, 12, 12, 0, 0, 0)) print ps for p in ps: print str(p) When I run this, what I actually get is: <plurk.PlurkPostList instance at 0x01E8D738> from the "print ps", then: for p in ps: TypeError: __iter__ returned non-iterator of type 'list' I don't understand - surely a list is iterable? Where am I going wrong - how do I access the Plurks in the PlurkPostList?
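
    For context, __iter__ has to return an iterator object, not the underlying list; the usual one-line fix in PlurkPostList is:

        def __iter__(self):
            return iter(self._plurks)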

    Read the article

  • How do I make custom functions chain-able with jQuery's?

    - by sergio
    I need a "callfront" or "precall" (the opposite of "callback" ¿?) to add in MANY places before an animation occurs in an existing plugin, To be used like e.g. $(some_unpredictable_obj).preFunct().animate(… The problem is, as I said they are MANY places, and all of them are different animations, on different objects. I can TELL where all of them occur, but I don't want to add over and over the same code. I actually have to add both a function before and after those animations, but I think I can use the callback for all of them. In a perfect world, I'd like to replace every animate(property, duration) by preFunct().animate(property,duration).postFunct() preFunct and postFunct don't need parameters, since they are always the same action, on the same object. This could be an amazing addition to "jQuery" (an easy way to jQuerize custom functions to be added to the normal chain (without messing with queues) I found this example but it will act on the applied element, and I don't want that because, as I said above, all the original animations to be added to are on different elements. I also found jQuery.timing, but it looks cooler the chain-able function :) Thanks.

    Read the article

  • Is there any other way of using signed applets

    - by 640KB
    Hi there. If I want to deploy high-privileged applets they need to be signed. For that a certificate is created and then a jar file is signed with jarsigner. After that, in the HTML code one has to specify code, codebase AND archive (the jar we signed). However, I wrote a servlet which does two things: it sits at the URL pointed to by the codebase and serves class bytecode to the applet, and it also uses serialization to communicate with the applet, so that whenever the applet encounters a class it does not know it goes to the codebase, which ends up back at the servlet. Almost like a mini RMI setup, but simpler. I hope you can see the power in this. Unfortunately, for signed applets one needs the archive. Now, the servlet is also able to load a Certificate object and can send it to the applet too. So here is the setup: at one point the applet has received class bytecode and it also has the Certificate. It would be nice if the applet could instantiate all received classes using that certificate (otherwise code from the jar is signed and code from outside is not, which prompts nasty messages to the user). So my question to you fine Java aficionados: would there be any way for me to use the bytecode data and the Certificate to instantiate the class as signed code, so that the plugin pops the security dialog, accepts the certificate and elevates the object's privileges? What I could find is that there is a class, CodeSource, that accepts a codebase URL and certificate and is essential to the signing process. What I am not sure about is how one could intercept the class loading inside applets to install additional certificates not obtained through a JAR file via archive. What do you say? Thanks a bunch.
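
    The closest standard hook is associating the received bytecode with a CodeSource that carries the certificates, via a custom class loader; whether the Java plugin will then treat the classes as signed and show its trust dialog is an assumption that needs testing, not a documented guarantee. A sketch:

        import java.net.URL;
        import java.security.CodeSource;
        import java.security.SecureClassLoader;
        import java.security.cert.Certificate;

        class ServletClassLoader extends SecureClassLoader {
            private final URL codebase;
            private final Certificate[] certs;

            ServletClassLoader(ClassLoader parent, URL codebase, Certificate[] certs) {
                super(parent);
                this.codebase = codebase;
                this.certs = certs;
            }

            Class<?> defineFromBytes(String name, byte[] bytecode) {
                CodeSource cs = new CodeSource(codebase, certs);   // ties the class to the certificates
                return defineClass(name, bytecode, 0, bytecode.length, cs);
            }
        }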

    Read the article
