Search Results

Search found 8285 results on 332 pages for 'console'.

Page 278/332 | < Previous Page | 274 275 276 277 278 279 280 281 282 283 284 285  | Next Page >

  • Controller changes format on variables when publishing

    - by Christoffer
    I am a newbie to ROR but catching on quickly. I have been working on this problem for a couple of hours now and it seems like a bug. It does not make any sense. I have a database with the following migration: class CreateWebsites < ActiveRecord::Migration def self.up create_table :websites do |t| t.string :name t.integer :estimated_value t.string :webhost t.string :purpose t.string :description t.string :tagline t.string :url t.integer :adsense t.integer :tradedoubler t.integer :affiliator t.integer :adsense_cpm t.boolean :released t.string :empire_type t.string :oldid t.string :old_outlink_policy t.string :old_inlink_policy t.string :old_priority t.string :old_profitability t.integer :priority_id t.integer :project_id t.integer :outlink_policy_id t.integer :inlink_policy_id t.timestamps end end def self.down drop_table :websites end end I have verified that what is created in the database is also integers, strings, etc. according to this migration. I have not touched the controller after generating it through scaffold, i.e. it is the standard controller with show, index etc. Now. When I enter data into the database, either through the web form, in the Rails console or directly in the database - such as www.domain.com for url or 500 for adsense - it will be created in the db without problem. However, when it is being published on the website the variables go completely nuts. Adsense (integer) turns into a date, url (string) turns into a float, and so on. This only happens to a few of the variables. This will also create a problem with "argument out of range" since I input 500 and Rails will try to output it as a date = crash and "argument out of range". So, how do I fix/troubleshoot this? Why do the formats change? Could it be because of the respond_to in the controller? Cheers, Christoffer

    Read the article

  • Delphi TRttiType.GetMethods return zero TRttiMethod instances

    - by conciliator
    I've recently been able to fetch a TRttiType for an interface using TRttiContext.FindType using Robert Love's "GetType" workaround ("registering" the interface by an explicit call to ctx.GetType, e.g. RType := ctx.GetType(TypeInfo(IMyPrettyLittleInterface));). One logical next step would be to iterate the methods of said interface. Consider program rtti_sb_1; {$APPTYPE CONSOLE} uses SysUtils, Rtti, mynamespace in 'mynamespace.pas'; var ctx: TRttiContext; RType: TRttiType; Method: TRttiMethod; begin ctx := TRttiContext.Create; RType := ctx.GetType(TypeInfo(IMyPrettyLittleInterface)); if RType <> nil then begin for Method in RType.GetMethods do WriteLn(Method.Name); end; ReadLn; end. This time, my mynamespace.pas looks like this: IMyPrettyLittleInterface = interface ['{6F89487E-5BB7-42FC-A760-38DA2329E0C5}'] procedure SomeProcedure; end; Unfortunately, RType.GetMethods returns a zero-length TArray-instance. Is anyone able to reproduce my troubles? (Note that in my example I've explicitly fetched the TRttiType using TRttiContext.GetType, not the workaround; the introduction is included to warn readers that there might be some unresolved issues regarding RTTI and interfaces.) Thanks!
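    A detail worth checking (a guess on my part, not something confirmed above): Delphi only emits method RTTI for an interface compiled under {$M+}, which is most easily had by descending from IInvokable. A minimal sketch of the declaration:

        type
          // IInvokable is declared with {$M+}, so descendants get method RTTI
          // and TRttiType.GetMethods should no longer come back empty.
          IMyPrettyLittleInterface = interface(IInvokable)
            ['{6F89487E-5BB7-42FC-A760-38DA2329E0C5}']
            procedure SomeProcedure;
          end;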

    Read the article

  • Indenting Paragraph With cout

    - by Eric
    Given a string of unknown length, how can you output it using cout so that the entire string displays as an indented block of text on the console? (so that even if the string wraps to a new line, the second line would have the same level of indentation) Example: cout << "This is a short string that isn't indented." << endl; cout << /* Indenting Magic */ << "This is a very long string that will wrap to the next line because it is a very long string that will wrap to the next line..." << endl; And the desired output: This is a short string that isn't indented. This is a very long string that will wrap to the next line because it is a very long string that will wrap to the next line... Edit: The homework assignment I'm working on is complete. The assignment has nothing to do with getting the output to format as in the above example, so I probably shouldn't have included the homework tag. This is just for my own enlightenment. I know I could count through the characters in the string, see when I get to the end of a line, then spit out a newline and output -x- number of spaces each time. I'm interested to know if there is a simpler, idiomatic C++ way to accomplish the above.
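    For what it's worth, iostreams have no built-in word-wrapping manipulator, so the usual route is a small helper along these lines (a sketch; the indent and width values are arbitrary):

        #include <iostream>
        #include <sstream>
        #include <string>

        // Wrap `text` to `width` columns, prefixing every line with `indent` spaces.
        std::string indent_wrap(const std::string& text, std::size_t indent, std::size_t width)
        {
            std::istringstream words(text);
            const std::string pad(indent, ' ');
            std::string word, out, line = pad;
            while (words >> word) {
                if (line.size() > indent && line.size() + 1 + word.size() > width) {
                    out += line + '\n';   // current line is full, start a new one
                    line = pad;
                }
                if (line.size() > indent) line += ' ';
                line += word;
            }
            return out + line + '\n';
        }

        // usage: std::cout << indent_wrap(longString, 4, 72);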

    Read the article

  • Spawning vim from a node git hook

    - by Lawrence Jones
    I've got a project purely in CoffeeScript, with git hooks for deployment also written in cs. I don't really want to break away from the language just to use bash for a quick commit message formatter, but I've got a problem spawning vim from the commit-msg hook. I've seen here that when piping to vim, the stdio is not necessarily set correctly to the tty streams. I get how that could cause a problem, but I don't exactly know how to get vim to load correctly using Node's spawn command. At the moment I have... vim = (require 'child_process').spawn('vim', [file], stdio: 'inherit') vim.on 'exit', (err) -> console.log "Exited! [#{err}]" cb?() ...which works fine to spawn a vim process that can r/w from the parent's stdio, but when I use this in the hook things go wrong. Vim states that the stdio is not from a terminal, and then once opened typing causes escape characters to pop up all over the place. Backspace, for example, will produce ^?. Any help would be appreciated!
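    One workaround I have seen for hooks (a sketch only, not verified against this repo; file and cb are the variables from the snippet above): open /dev/tty yourself and hand its descriptor to spawn, since the hook's own stdio is pipes rather than the terminal:

        fs      = require 'fs'
        {spawn} = require 'child_process'

        # /dev/tty is the controlling terminal; using it directly sidesteps the
        # "not from a terminal" warning and the garbled keys.
        tty = fs.openSync '/dev/tty', 'r+'
        vim = spawn 'vim', [file], stdio: [tty, tty, tty]
        vim.on 'exit', (code) ->
          fs.closeSync tty
          console.log "Exited! [#{code}]"
          cb?()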

    Read the article

  • How to format the node_redis info function output?

    - by hh54188
    I want check the Redis info on my pc with node, so I use node_redis and run the info function: var redis = require("redis"), client = redis.createClient(); client.on("connect", function () { client.info(function (err, replay) { console.log(replay); }) }) but the response is un-format: `#Server\r\nredis_version:2.6.16\r\nredis_git_sha1:00000000\r\nredis_git_dirty:0\r\nredis_mode:standalone\r\nos:Linux 3.8.0-29-generic x86_64\r\narch_bits:64\r\nmultiplexing_api:epoll\r\ngcc_version:4.6.3\r\nprocess_id:2941\r\nrun_id:e60f261a6f4f6f081563a47961315eff6b1c005d\r\ntcp_port:6379\r\nuptime_in_seconds:1777\r\nuptime_in_days:0\r\nhz:10\r\nlru_clock:2040689\r\n\r\n# Clients\r\nconnected_clients:2\r\nclient_longest_output_list:0\r\nclient_biggest_input_buf:0\r\nblocked_clients:0\r\n\r\n# Memory\r\nused_memory:562584\r\nused_memory_human:549.40K\r\nused_memory_rss:2031616\r\nused_memory_peak:561784\r\nused_memory_peak_human:548.62K\r\nused_memory_lua:31744\r\nmem_fragmentation_ratio:3.61\r\nmem_allocator:jemalloc-3.2.0\r\n\r\n# Persistence\r\nloading:0\r\nrdb_changes_since_last_save:0\r\nrdb_bgsave_in_progress:0\r\nrdb_last_save_time:1383553917\r\nrdb_last_bgsave_status:ok\r\nrdb_last_bgsave_time_sec:-1\r\nrdb_current_bgsave_time_sec:-1\r\naof_enabled:0\r\naof_rewrite_in_progress:0\r\naof_rewrite_scheduled:0\r\naof_last_rewrite_time_sec:-1\r\naof_current_rewrite_time_sec:-1\r\naof_last_bgrewrite_status:ok\r\n\r\n# Stats\r\ntotal_connections_received:3\r\ntotal_commands_processed:5\r\ninstantaneous_ops_per_sec:0\r\nrejected_connections:0\r\nexpired_keys:0\r\nevicted_keys:0\r\nkeyspace_hits:0\r\nkeyspace_misses:0\r\npubsub_channels:0\r\npubsub_patterns:0\r\nlatest_fork_usec:0\r\n\r\n# Replication\r\nrole:master\r\nconnected_slaves:0\r\n\r\n# CPU\r\nused_cpu_sys:0.13\r\nused_cpu_user:0.19\r\nused_cpu_sys_children:0.00\r\nused_cpu_user_children:0.00\r\n\r\n# Keyspace\r\n' How can I turn it to an object? like: { redis_version:2.6.16, redis_git_sha1:00000000, redis_git_dirty:0, ...... } so that I can read each property's value, get information I need
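    The reply is just the raw INFO text, so one option is to split it on the CRLF line breaks and drop the "# Section" headers — a small sketch (all values stay strings unless you convert them yourself):

        function parseInfo(reply) {
          var info = {};
          String(reply).split('\r\n').forEach(function (line) {
            if (!line || line[0] === '#') return;   // skip blanks and section headers
            var idx = line.indexOf(':');
            info[line.slice(0, idx)] = line.slice(idx + 1);
          });
          return info;
        }

        client.info(function (err, reply) {
          var info = parseInfo(reply);
          console.log(info.redis_version);          // e.g. "2.6.16"
        });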

    Read the article

  • How can you pre-define a variable that will contain an anonymous type?

    - by HitLikeAHammer
    In the simplified example below I want to define result before it is assigned. The LINQ queries below return sequences of anonymous types. result will come out of the LINQ queries as an IEnumerable<'a but I can't define it that way at the top of the method. Is what I am trying to do possible (in .NET 4)? bool large = true; var result = new IEnumerable(); //This is wrong List<int> numbers = new int[]{1,2,3,4,5,6,7,8,9,10}.ToList<int>(); if (large) { result = from n in numbers where n > 5 select new { value = n }; } else { result = from n in numbers where n < 5 select new { value = n }; } foreach (var num in result) { Console.WriteLine(num.value); } EDIT: To be clear I know that I do not need anonymous types in the example above. It is just to illustrate my question with a small, simple example.
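    One sketch of a workaround (it relies on anonymous types with identical property names and types unifying to a single type within an assembly, and it assumes a using System.Linq directive): seed result with an empty sequence of the same shape, then assign either query to it.

        var result = Enumerable.Empty<int>().Select(n => new { value = n });

        if (large)
            result = numbers.Where(n => n > 5).Select(n => new { value = n });
        else
            result = numbers.Where(n => n < 5).Select(n => new { value = n });

        foreach (var num in result)
            Console.WriteLine(num.value);   // num.value is still strongly typed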

    Read the article

  • XmlDocument.Load() throws XmlSchemaValidationException

    - by Praetorian
    Hi, I'm trying to validate an XML document against a schema (which is embedded in my program as a resource). I got everything to work, so I tried to test for errors by adding a second sibling node in the XML at a location where the schema specifies maxOccurs="1". The problem is that my ValidationEventHandler is never getting called, also XmlDocument.Load() is throwing an XmlSchemaValidationException exception when I'd expected XmlDocument.Validate() to do that. This is the code I have: private void ValidateUserData( string xmlPath ) { var resInfo = Application.GetResourceStream( new Uri( @"MySchema.xsd", UriKind.Relative ) ); var schema = XmlSchema.Read( resInfo.Stream, SchemaValidationCallBack ); XmlSchemaSet schemaSet = new XmlSchemaSet(); schemaSet.Add( schema ); schemaSet.ValidationEventHandler += SchemaValidationCallBack; XmlReaderSettings settings = new XmlReaderSettings(); settings.Schemas = schemaSet; settings.ValidationType = ValidationType.Schema; XmlDocument doc = new XmlDocument(); using( XmlReader reader = XmlReader.Create( xmlPath, settings ) ) { doc.Load( reader ); // <-- This line throws an exception if XML is ill-formed reader.Close(); } doc.Validate( SchemaValidationCallBack );// <-- This is never reached } private void SchemaValidationCallBack( object sender, ValidationEventArgs e ) { Console.WriteLine( "SchemaValidationCallBack: " + e.Message ); } How do I get the callback to be called so I can handle validation errors? Thanks for your help!
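    One likely explanation (my reading, not confirmed above): when the XmlReaderSettings has no ValidationEventHandler of its own, the validating reader throws on the first error, which is why Load() blows up before Validate() is ever reached. Attaching the callback to the settings as well should route those errors to the handler instead — a sketch:

        XmlReaderSettings settings = new XmlReaderSettings();
        settings.Schemas = schemaSet;
        settings.ValidationType = ValidationType.Schema;
        // With a handler attached, validation problems found while the document
        // is being read are reported here rather than thrown from doc.Load().
        settings.ValidationEventHandler += SchemaValidationCallBack;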

    Read the article

  • Automatically update ActiveRecord object

    - by Aleksandr Koss
    I have some models: class Father < ActiveRecord::Base has_many :children end class Child < ActiveRecord::Base belongs_to :father end Then I do something like this: $ script/console test Loading test environment (Rails 2.3.5) >> @f1 = Father.create :test => "Father" => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41"> >> @f2 = Father.find :first => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41"> >> @f1 == @f2 => true >> @f1.children => [] >> @f2.children => [] >> @f1.children.create :test => "Child1" => #<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15"> >> @f1.children => [#<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">] >> @f2.children => [] >> @f2.reload => #<Father id: 1, test: "Father", created_at: "2010-03-30 08:01:41", updated_at: "2010-03-30 08:01:41"> >> @f2.children => [#<Child id: 1, test: "Child1", father_id: 1, created_at: "2010-03-30 08:02:15", updated_at: "2010-03-30 08:02:15">] As you see, Rails caches the @f2 object. To get actual data we should call reload. Is there a way to automatically reload @f2 after a children update without calling the "reload" method?
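    As far as I know there is no automatic way in Rails 2.3 — @f1 and @f2 are separate objects, each with its own association cache — but the association reader accepts a force_reload flag, which is lighter than a full reload:

        >> @f2.children(true)   # true bypasses the cached association and re-queries
        => [#<Child id: 1, test: "Child1", father_id: 1, ...>]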

    Read the article

  • C# Breakpoint Weirdness

    - by Dan
    In my program I've got two data files A and B. The data in A is static and the data in B refers back to the data in A. In order to make sure the data in B is invalidated when A is changed, I keep an identifier for each of the links which is a long byte-string identifying the data. I get this string using BitConverter on some of the important properties. My problem is that this scheme isn't working. I save the identifiers initially, and when I reload (with the exact same data in A) the identifiers don't match anymore. It seems the bit converter gives different results when I go to save. The really weird thing about it is, if I place a breakpoint in the save code, I can see the identifier it's writing to the file is fine, and the next load works. If I don't place a breakpoint and say print the identifiers to the console instead, they're totally different. It's like when my program is running at full speed the CPU messes up some instructions. This isn't the first time something like this happens to me. I've seen it in other projects. What gives? Has anyone ever experienced this kind of debugging weirdness? I can't explain how stopping the program and not stopping it can change the output. Also, it's not a hardware problem because this happens on my laptop as well.

    Read the article

  • C# - default parameter values from previous parameter

    - by Sagar R. Kothari
    namespace HelloConsole { public class BOX { double height, length, breadth; public BOX() { } // here, I wish to pass 'h' to remaining parameters if not passed // FOLLOWING Gives compilation error. public BOX (double h, double l = h, double b = h) { Console.WriteLine ("Constructor with default parameters"); height = h; length = l; breadth = b; } } } // // BOX a = new BOX(); // default constructor. all okay here. // BOX b = new BOX(10,20,30); // all parameters passed. all okay here. // BOX c = new BOX(10); // Here, I want = length=10, breadth=10,height=10; // BOX d = new BOX(10,20); // Here, I want = length=10, breadth=20,height=10; The question is: to achieve the above, is constructor overloading (as follows) the only option? public BOX(double h) { height = length = breadth = h; } public BOX(double h, double l) { height = breadth = h; length = l; }
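    Default parameter values have to be compile-time constants, so l = h cannot work; besides overloading, one common pattern (a sketch) is a nullable sentinel that falls back to h inside the body:

        public BOX(double h, double? l = null, double? b = null)
        {
            Console.WriteLine("Constructor with default parameters");
            height  = h;
            length  = l ?? h;   // use h when the caller omits the argument
            breadth = b ?? h;
        }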

    Read the article

  • Permission denied to access property 'toString'

    - by Anders
    I'm trying to find a generic way of getting the name of constructors. My goal is to create a convention-over-configuration framework for KnockoutJS. My idea is to iterate over all objects in the window, and when I find the constructor I'm looking for I can use the index to get the name of the constructor. The code so far: (function() { constructors = {}; window.findConstructorName = function(instance) { var constructor = instance.constructor; var name = constructors[constructor]; if(name !== undefined) { return name; } var traversed = []; var nestedFind = function(root) { if(typeof root == "function" || traversed[root]) { return } traversed[root] = true; for(var index in root) { if(root[index] == constructor) { return index; } var found = nestedFind(root[index]); if(found !== undefined) { return found; } } } name = nestedFind(window); constructors[constructor] = name; return name; } })(); var MyApp = {}; MyApp.Foo = function() { }; var instance = new MyApp.Foo(); console.log(findConstructorName(instance)); The problem is that I get a "Permission denied to access property 'toString'" exception, and I can't even try/catch to see which object is causing the problem. Fiddle: http://jsfiddle.net/4ZwaV/
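    My guess (unverified against the fiddle) is that the walk over window is reaching cross-origin frames or restricted host objects, whose properties throw on access. Wrapping the property read inside nestedFind in a try/catch would let the search skip them — a sketch of that loop:

        for (var index in root) {
            var value;
            try {
                value = root[index];    // cross-origin frames and some host
            } catch (e) {               // objects throw here; just skip them
                continue;
            }
            if (value == constructor) { return index; }
            var found = nestedFind(value);
            if (found !== undefined) { return found; }
        }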

    Read the article

  • Can't get KnownType to work with WCF

    - by Kelly Cline
    I have an interface and a class defined in separate assemblies, like this: namespace DataInterfaces { public interface IPerson { string Name { get; set; } } } namespace DataObjects { [DataContract] [KnownType( typeof( IPerson ) ) ] public class Person : IPerson { [DataMember] public string Name { get; set; } } } This is my Service Interface: public interface ICalculator { [OperationContract] IPerson GetPerson ( ); } When I update my Service Reference for my Client, I get this in the Reference.cs: public object GetPerson() { return base.Channel.GetPerson(); I was hoping that KnownType would give me IPerson instead of "object" here. I have also tried [KnownType( typeof( Person ) ) ] with the same result. I have control of both client and server, so I have my DataObjects (where Person is defined) and DataInterfaces (where IPerson is defined) assemblies in both places. Is there something obvious I am missing? I thought KnownType was the answer to being able to use interfaces with WCF. ----- FURTHER INFORMATION ----- I removed the KnownType from the Person class and added [ServiceKnownType( typeof( Person ) ) ] to my service interface, as suggested by Richard. The client-side proxy still looks the same, public object GetPerson() { return base.Channel.GetPerson(); , but now it doesn't blow up. The client just has an "object", though, so it has to cast it to IPerson before it is useful. var person = client.GetPerson ( ); Console.WriteLine ( ( ( IPerson ) person ).Name );

    Read the article

  • Can not access response.body inside after filter block in Sinatra 1.0

    - by Petr Vostrel
    I'm struggling with a strange issue. According to http://github.com/sinatra/sinatra (section Filters) a response object is available in after filter blocks in Sinatra 1.0. While response.status is correctly accessible, I cannot see a non-empty response.body from my routes inside the after filter. I have this rackup file: config.ru require 'app' run TestApp Then I installed the Sinatra 1.0.b gem using: gem install --pre sinatra And this is my tiny app with a single route: app.rb require 'rubygems' require 'sinatra/base' class TestApp < Sinatra::Base set :root, File.dirname(__FILE__) get '/test' do 'Some response' end after do halt 500 if response.empty? # used 500 just for illustration end end And now, I would like to access the response inside the after filter. When I run this app and access the /test URL, I get a 500 response as if the response is empty, but the response clearly is 'Some response'. Along with my request to /test, a separate request to /favicon.ico is issued by the browser and that returns 404 as there is no route nor a static file. But I would expect the 500 status to be returned as the response should be empty. In the console, I can see that within the after filter the response to /favicon.ico is something like 'Not found' and the response to /test really is empty even though there is a response returned by the route. What am I missing?

    Read the article

  • c++ use of winmain()

    - by Jack
    Hi, I just started learning programming for Windows in C++. I had this mental image that Win32 programming is based on calling Windows functions and sending parameters to and from them. Like, when you want to create a window, you call some Win32 function that handles the Windows GUI and say "Hi, please, create me a new window, 100 x 100 px, with two buttons", and that GUI function says "Hi, no problem, when something happens, like the user clicking a button, I will change this variable xy located in this location". So I thought that it would be very similar to console programming. But the very first instruction surprised me. I always thought that every program executes the main() function first. So, when I launch an app, Windows stores some parameters on top of the stack and runs that application. So I assumed that defining main() is just a C++ way to tell the compiler where the first instruction should be. But in Win32 programming, there is a function called WinMain() which starts first. So I am a little confused. I thought it's a rule that the compiler must have main() to start with, that main just defines where to start, like some start point identifier. So, please, why is there a WinMain() function instead of main()? Just when I thought that C++ programming is as logical as assembler, it confuses me once again.
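    For reference, a minimal sketch of the entry point the Windows runtime looks for in a GUI (non-console) build — the C runtime startup code calls this instead of main():

        #include <windows.h>

        // Used when the program is linked for the Windows subsystem;
        // a console build still starts at plain main().
        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                           LPSTR lpCmdLine, int nCmdShow)
        {
            MessageBox(NULL, TEXT("Hello from WinMain"), TEXT("Demo"), MB_OK);
            return 0;
        }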

    Read the article

  • How to properly translate the "var" result of a lambda expression to a concrete type?

    - by CrimsonX
    So I'm trying to learn more about lambda expressions. I read this question on stackoverflow, concurred with the chosen answer, and have attempted to implement the algorithm using a console app in C# using a simple LINQ expression. My question is: how do I translate the "var result" of the lambda expression into a usable object that I can then print the result? I would also appreciate an in-depth explanation of what is happening when I declare the outer => outer.Value.Frequency (I've read numerous explanations of lambda expressions but additional clarification would help) C# //Input : {5, 13, 6, 5, 13, 7, 8, 6, 5} //Output : {5, 5, 5, 13, 13, 6, 6, 7, 8} //The question is to arrange the numbers in the array in decreasing order of their frequency, preserving the order of their occurrence. //If there is a tie, like in this example between 13 and 6, then the number occurring first in the input array would come first in the output array. List<int> input = new List<int>(); input.Add(5); input.Add(13); input.Add(6); input.Add(5); input.Add(13); input.Add(7); input.Add(8); input.Add(6); input.Add(5); Dictionary<int, FrequencyAndValue> dictionary = new Dictionary<int, FrequencyAndValue>(); foreach (int number in input) { if (!dictionary.ContainsKey(number)) { dictionary.Add(number, new FrequencyAndValue(1, number) ); } else { dictionary[number].Frequency++; } } var result = dictionary.OrderByDescending(outer => outer.Value.Frequency); // How to translate the result into something I can print??
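    In OrderByDescending(outer => outer.Value.Frequency), outer is simply the name given to each KeyValuePair<int, FrequencyAndValue> as the dictionary is enumerated, and the lambda returns the key to sort by. No anonymous type is involved, so the result can be consumed directly with foreach — a sketch (assuming FrequencyAndValue exposes Frequency and Value properties matching the constructor call above), which for the sample input should print 5 5 5 13 13 6 6 7 8:

        foreach (var pair in result)
        {
            for (int i = 0; i < pair.Value.Frequency; i++)
            {
                Console.Write(pair.Value.Value + " ");
            }
        }
        Console.WriteLine();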

    Read the article

  • ~1 second TcpListener Pending()/AcceptTcpClient() lag

    - by cpf
    It's probably easiest to just watch this video: http://screencast.com/t/OWE1OWVkO As you can see, there is a delay between a connection being initiated (via telnet or Firefox) and my program first getting word of it. Here's the code that waits for the connection: public IDLServer(System.Net.IPAddress addr,int port) { Listener = new TcpListener(addr, port); Listener.Server.NoDelay = true;//I added this just for testing, it has no impact Listener.Start(); ConnectionThread = new Thread(ConnectionListener); ConnectionThread.Start(); } private void ConnectionListener() { while (Running) { while (Listener.Pending() == false) { System.Threading.Thread.Sleep(1); }//this is the part with the lag Console.WriteLine("Client available");//from this point on everything runs perfectly fast TcpClient cl = Listener.AcceptTcpClient(); Thread proct = new Thread(new ParameterizedThreadStart(InstanceHandler)); proct.Start(cl); } } (I was having some trouble getting the code into a code block) I've tried a couple of different things. Could it be that I'm using TcpClient/Listener instead of a raw Socket object? It's not mandatory TCP overhead, I know, and I've tried running everything in the same thread, etc.

    Read the article

  • How to debug a Gruntfile with breakpoints using node-inspector?

    - by Kris Hollenbeck
    So I have spent the past couple days trying to get this to work with no luck. Most of the solutions I have found seem to work "okay" for debugging node applications. But I haven't had much luck debugging grunt stand alone. I would like to be able to set breakpoints in my gruntfile and either step through the code with either the browser or an IDE. I have tried the following: Debugging using intelliJ IDE Using Grunt Console (Process finished with exit code 6) Debugging with Nodeeclipse (This sort of works okay but doesn't hit the breakpoints set in eclipse, not very intuitive) Debugging using node-inspector (This one also sort of works. I can step through a little ways using F11 and F10 in chrome. But eventually it just crashes. Using F8 to skip to break point never works.) ERROR MESSAGE USING NODE-INSPECTOR So currently node-inspector feels like it has gotten me the closest to what I want. To get here I did the following: From my grunt directory I ran the following commands: grunt node-inspector node --debug-brk Gruntfile.js And then from there I went to localhost:8080/debug?port=5858 to debug my Gruntfile.js. But like I mentioned above, as soon as I hit F8 to skip to breakpoint it crashes with the above error. Has anybody had any success using this method to try to debug a Gruntfile? So far from my search efforts I have not found a very well documented way of doing this. So hopefully this will be useful or beneficial information for future users. Also I am using Windows 7 by the way. Thanks in advance.

    Read the article

  • Multiple websites, Single sign-on design

    - by Yannis
    Hi all, I have a question. A client I have been doing some work for recently has a range of websites with different login mechanisms. He is looking to slowly migrate to a single sign-on mechanism for his websites (all written in asp.net mvc). I am looking at my options here, so here is a list of requirements: 1) It has to be secure (duh) 2) It needs to support extra user properties over and above the usual name, address stuff (such as money or credits for a user) 3) It has to provide a centralized user management web console for his convenience (I understand that this will be a small project on top of whatever design solution I choose to go for) 4) It has to integrate with the existing websites without re-engineering the whole product (I understand that this depends on the current product implementation). 5) It has to deal with emailing the user when he registers (in order for him to activate his account) 6) It has to deal with activating the user when he clicks the activate me link in the email (I understand that 5 and 6 require some form of email templating system to support different emails per application) I was thinking of creating a library working together with forms authentication that exposes whatever methods are required (e.g. login, logout, activate, etc.) and a small RESTful service to implement activation from email, registration processing, etc. Taking into account that loads of things have been left out to make this question brief and to the point, does this sound like a good design? But this looks like a very common problem, so aren't there any existing projects that I could use? Thanks for reading.

    Read the article

  • Delayed_job not executing the perform method but emptying the job queue

    - by James
    I have a fresh Rails 3 app, here's my Gemfile: source 'http://rubygems.org' gem 'rails', '3.0.0' gem 'delayed_job' gem 'sqlite3-ruby', :require => 'sqlite3' Here's the class that represents the job that I want to queue: class Me < Struct.new(:something) def perform puts "Hello from me" logger.info "Hello from me" logger.debug "Hello from me" raise Exception.new end end From the console with no workers running: irb(main):002:0> Delayed::Job.enqueue Me.new(1) => #<Delayed::Backend::ActiveRecord::Job id: 7, priority: 0, attempts: 0, handler: "--- !ruby/struct:Me \nsomething: 1\n", last_error: nil, run_at: "2010-12-29 07:24:11", locked_at: nil, failed_at: nil, locked_by: nil, created_at: "2010-12-29 07:24:11", updated_at: "2010-12-29 07:24:11"> Like I mentioned: there are no workers running: irb(main):003:0> Delayed::Job.all => [#<Delayed::Backend::ActiveRecord::Job id: 7, priority: 0, attempts: 0, handler: "--- !ruby/struct:Me \nsomething: 1\n", last_error: nil, run_at: "2010-12-29 07:24:11", locked_at: nil, failed_at: nil, locked_by: nil, created_at: "2010-12-29 07:24:11", updated_at: "2010-12-29 07:24:11">] I start a worker with script/delayed_job run. The queue gets emptied: irb(main):006:0> Delayed::Job.all => [] However, nothing happens as a result of the puts, nothing is logged from the logger calls, and no exception is raised. I'd appreciate any help / insight or anything to try.
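    One thing that might explain the silence (an assumption, not verified against this setup): script/delayed_job run daemonizes, so the puts and logger output go to the worker's log (typically log/delayed_job.log) rather than to your terminal. Running a worker in the foreground makes the output and any errors visible:

        rake jobs:work   # foreground worker; puts/logger output and failures print here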

    Read the article

  • Passing pointer into function, data appears initialized in function, on return appears uninitialized

    - by Luke Mcneice
    I'm passing function GetCurrentDate() a pointer to a tm struct. Within that function I printf the uninitialized data, then the initialized data. Expected results. However, when I return, the tm struct appears uninitialized. See the console output below. What am I doing wrong? uninitialized date:??? ???-1073908332 01:9448278:-1073908376 -1217355836 initialized date:Wed May 5 23:08:40 2010 Caller date:??? ???-1073908332 01:9448278:-1073908376 -121735583 int main() { test(); } int test() { struct tm* CurrentDate; GetCurrentDate(CurrentDate); printf("Caller date:%s\n",asctime (CurrentDate)); return 1; } int GetCurrentDate(struct tm* p_ReturnDate) { printf("uninitialized date:%s\n",asctime (p_ReturnDate)); time_t m_TimeEntity; m_TimeEntity = time(NULL); //setting current time into a time_t struct p_ReturnDate = localtime(&m_TimeEntity); //converting time_t to tm struct printf("initialized date:%s\n",asctime (p_ReturnDate)); return 1; }
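    The assignment p_ReturnDate = localtime(...) only rebinds the local copy of the pointer, and the caller's CurrentDate pointer was never pointing at real storage to begin with. A sketch of the fix: let the caller own a struct tm and copy into it through the pointer.

        #include <stdio.h>
        #include <time.h>

        int GetCurrentDate(struct tm *p_ReturnDate)
        {
            time_t now = time(NULL);
            *p_ReturnDate = *localtime(&now);   /* copy the struct, don't rebind the pointer */
            printf("initialized date:%s\n", asctime(p_ReturnDate));
            return 1;
        }

        int main(void)
        {
            struct tm CurrentDate;              /* real storage, not an uninitialized pointer */
            GetCurrentDate(&CurrentDate);
            printf("Caller date:%s\n", asctime(&CurrentDate));
            return 0;
        }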

    Read the article

  • Why is debugging better in an IDE?

    - by Bill Karwin
    I've been a software developer for over twenty years, programming in C, Perl, SQL, Java, PHP, JavaScript, and recently Python. I've never had a problem I could not debug using some careful thought, and well-placed debugging print statements. I respect that many people say that my techniques are primitive, and using a real debugger in an IDE is much better. Yet from my observation, IDE users don't appear to debug faster or more successfully than I can, using my stone knives and bear skins. I'm sincerely open to learning the right tools, I've just never been shown a compelling advantage to using visual debuggers. Moreover, I have never read a tutorial or book that showed how to debug effectively using an IDE, beyond the basics of how to set breakpoints and display the contents of variables. What am I missing? What makes IDE debugging tools so much more effective than thoughtful use of diagnostic print statements? Can you suggest resources (tutorials, books, screencasts) that show the finer techniques of IDE debugging? Sweet answers! Thanks much to everyone for taking the time. Very illuminating. I voted up many, and voted none down. Some notable points: Debuggers can help me do ad hoc inspection or alteration of variables, code, or any other aspect of the runtime environment, whereas manual debugging requires me to stop, edit, and re-execute the application (possibly requiring recompilation). Debuggers can attach to a running process or use a crash dump, whereas with manual debugging, "steps to reproduce" a defect are necessary. Debuggers can display complex data structures, multi-threaded environments, or full runtime stacks easily and in a more readable manner. Debuggers offer many ways to reduce the time and repetitive work to do almost any debugging tasks. Visual debuggers and console debuggers are both useful, and have many features in common. A visual debugger integrated into an IDE also gives you convenient access to smart editing and all the other features of the IDE, in a single integrated development environment (hence the name).

    Read the article

  • How do I set the proxy and SOCKS in libcurl?

    - by acidzombie24
    I am trying to configure my .NET app to use a proxy. My source is in C# but I learned cURL via C++. My question is: where do I put the SOCKS IP and port? I looked through the documentation and didn't see it. I believe that is what is causing me these problems. When I run this code it will quite literally time out and not call my header function or writer function. If I comment out the first two CURLOPT lines (the two proxy lines) my code runs with no problems. In Firefox I set the HTTP proxy and SOCKS host separately; they are different IPs and ports. How do I set the SOCKS part? The code below has the dummy proxy set but I can't figure out the SOCKS part. static void Main(string[] args) { SeasideResearch.LibCurlNet.Curl.GlobalInit((int)SeasideResearch.LibCurlNet.CURLinitFlag.CURL_GLOBAL_ALL); var curl = new Easy(); { curl.SetOpt(CURLoption.CURLOPT_PROXY, "http://127.0.0.1:1234"); curl.SetOpt(CURLoption.CURLOPT_PROXYTYPE, CURLproxyType.CURLPROXY_SOCKS5); curl.SetOpt(CURLoption.CURLOPT_URL, "http://whatismyipaddress.com/ip-lookup"); curl.SetOpt(CURLoption.CURLOPT_FOLLOWLOCATION, 1); curl.SetOpt(CURLoption.CURLOPT_USERAGENT, @"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2b5) Gecko/20091204 Firefox/3.6b5"); curl.SetOpt(CURLoption.CURLOPT_HEADERFUNCTION, hf); curl.SetOpt(CURLoption.CURLOPT_HEADERDATA, data); curl.SetOpt(CURLoption.CURLOPT_WRITEFUNCTION, wf); curl.SetOpt(CURLoption.CURLOPT_WRITEDATA, sw); curl.SetOpt(CURLoption.CURLOPT_SSL_VERIFYPEER, 0); curl.Perform(); var sz = sw.ToString(); var myrealip = sz.IndexOf("12.34.56.78") !=-1; } //Console.WriteLine(sz); SeasideResearch.LibCurlNet.Curl.GlobalCleanup(); }
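    As far as I know libcurl talks to a single proxy per transfer, so unlike the separate Firefox settings you pick either the HTTP proxy or the SOCKS one. For SOCKS, CURLOPT_PROXY should point at the SOCKS listener itself and CURLOPT_PROXYTYPE tells libcurl to speak SOCKS5 to it — a sketch (127.0.0.1:9050 is a placeholder for the real SOCKS host and port):

        curl.SetOpt(CURLoption.CURLOPT_PROXY, "127.0.0.1:9050");
        curl.SetOpt(CURLoption.CURLOPT_PROXYTYPE, CURLproxyType.CURLPROXY_SOCKS5);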

    Read the article

  • How do I post a .wav file from CS5 Flash, AS3 to a Java servlet?

    - by Muostar
    Hi, I am trying to send a ByteArray from my .fla to my running Tomcat server integrated in Eclipse. From Flash I am using the following code: var loader:URLLoader = new URLLoader(); var header:URLRequestHeader = new URLRequestHeader("audio/wav", "application/octet-stream"); var request:URLRequest = new URLRequest("http://localhost:8080/pdp/Server?wav=" + tableID); request.requestHeaders.push(header); request.method = URLRequestMethod.POST; request.data = file;//wav; loader.load(request); And my Java servlet looks as follows: try{ int readBytes = -1; int lengthOfBuffer = request.getContentLength(); InputStream input = request.getInputStream(); byte[] buffer = new byte[lengthOfBuffer]; ByteArrayOutputStream output = new ByteArrayOutputStream(lengthOfBuffer); while((readBytes = input.read(buffer, 0, lengthOfBuffer)) != -1) { output.write(buffer, 0, readBytes); } byte[] finalOutput = output.toByteArray(); input.close(); FileOutputStream fos = new FileOutputStream(getServletContext().getRealPath(""+"/temp/"+wav+".wav")); fos.write(finalOutput); fos.close(); When I run the Flash .swf file and send the ByteArray to the server, I receive the following in the server's console window: (loads and loads of Chinese symbols) May 20, 2010 7:04:57 PM org.apache.tomcat.util.http.Parameters processParameters WARNING: Parameters: Character decoding failed. Parameter '? (loads and loads of Chinese symbols) and then it loops like this for a long time. It is like I receive the bytes but am not encoding/decoding them correctly. What can I do?
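    Two things stand out to me (both assumptions, not verified): URLRequestHeader(name, value) takes the header name first, so the code above defines a header literally called "audio/wav"; and with no contentType set, Flash posts the data as application/x-www-form-urlencoded, which is what makes Tomcat try to decode the raw bytes as form parameters. A sketch of the sending side:

        var request:URLRequest = new URLRequest("http://localhost:8080/pdp/Server?wav=" + tableID);
        request.method = URLRequestMethod.POST;
        // Send the bytes as an opaque body instead of url-encoded form data.
        request.contentType = "application/octet-stream";
        request.data = file;   // the wav ByteArray
        loader.load(request);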

    Read the article

  • Rails: creating a custom data type, to use with generator classes and a bunch of questions related t

    - by Shyam
    Hi, After being productive with Rails for some weeks, I learned some tricks and got some experience with the framework. About 10 days ago, I figured out it is possible to build a custom data type for migrations by adding some code in the Table definition. Also, after learning a bit about floating points (and how evil they are) vs integers, the money gem and other possible solutions, I decided I didn't WANT to use the money gem, but instead to try to learn more about programming and find a solution myself. Some suggestions said that I should be using integers, one for the whole numbers and one for the cents. When playing in script/console, I discovered how easy it is to work with calculations and arrays. But I am talking too much (and the reason I am, is to give some sufficient background). Right now, while playing with the scaffold generator (yes, I use it, because I like the way I can quickly set up a prototype while I am still researching my objectives), I like to use a DRY method. In my opinion, I should build a custom "object" that can hold two variables (Fixnum), one for the whole, one for the cents. In my big dream, I would be able to do the following: script/generate scaffold Cake name:string description:text cost:mycustom Where mycustom should create two integer columns (one for wholes, one for cents). Right now I could do this by doing: script/generate scaffold Cake name:string description:text cost_w:integer cost_c:integer I also had an idea of creating a "cost model", which would hold two columns of integers and add a cost_id column to my scaffold. But wouldn't that be an extra table that would cause some kind of performance penalty? And wouldn't that defy the purpose of the Cake model in the first place, because the costs are an attribute of individual Cake entries? The reason I would want such functionality is that I am thinking of having multiple "costs" inside my Rails application. Thank you for your feedback, comments and answers! I hope my message is understandable; my apologies for incorrect grammar or weird sentences as English is not my native language.
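    On the migration side this can be done by reopening the table definition with a helper that expands to two integer columns — a sketch (Rails 2.3-era API; the names are my own, and teaching the scaffold generator itself to accept cost:mycustom is a separate template change):

        class ActiveRecord::ConnectionAdapters::TableDefinition
          def cost(name, options = {})
            integer("#{name}_whole", options)   # whole units
            integer("#{name}_cents", options)   # cents
          end
        end

        # then, inside a migration:
        #   create_table :cakes do |t|
        #     t.string :name
        #     t.cost   :cost
        #   end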

    Read the article
