Search Results

Search found 16311 results on 653 pages for 'environment variables'.


  • CSS optimization - extra classes in DOM or preprocessor-repetitive styling in CSS file?

    - by anna.mi
    I'm starting on a fairly large project and I'm considering the option of using LESS to pre-process my CSS. The useful thing about LESS is that you can define a mixin such as:

        .border-radius(@radius) {
          -webkit-border-radius: @radius;
          -moz-border-radius: @radius;
          -o-border-radius: @radius;
          -ms-border-radius: @radius;
          border-radius: @radius;
        }

    and then use it in a class declaration as:

        .rounded-div {
          .border-radius(10px);
        }

    to get the output CSS:

        .rounded-div {
          -webkit-border-radius: 10px;
          -moz-border-radius: 10px;
          -o-border-radius: 10px;
          -ms-border-radius: 10px;
          border-radius: 10px;
        }

    This is extremely useful in the case of browser prefixes. However, the same concept could be used to encapsulate commonly-used CSS, for example:

        .column-container {
          overflow: hidden;
          display: block;
          width: 100%;
        }
        .column(@width) {
          float: left;
          width: @width;
        }

    and then use this mixin whenever I need columns in my design:

        .my-column-outer {
          .column-container();
          background: red;
        }
        .my-column-inner {
          .column(50%);
          color: yellow;
        }

    (Of course, using the preprocessor we could easily expand this to be much more useful, e.g. pass the number of columns and the container width as variables and have LESS determine the width of each column from them; see the sketch below.) The problem with this is that when compiled, my final CSS file would have 100 such declarations, copied and pasted, making the file huge, bloated and repetitive.

    The alternative would be to use a grid system with predefined classes for each column-layout option, e.g. .c-50 (with a "float: left; width: 50%;" definition), .c-33 and .c-25 to accommodate 2-column, 3-column and 4-column layouts, and then attach these classes in my DOM. I really dislike the idea of the extra classes; from experience it results in a bloated DOM (creating extra divs just to attach the grid classes to). Also, the most basic HTML/CSS tutorial will tell you that markup should be separated from styling, and grid classes are styling-related! To me, it's the same as attaching a "border-radius-10" class to the .rounded-div example above. On the other hand, the large CSS file that would result from the repetitive code is also a disadvantage.

    So I guess my question is: which one would you recommend, and which do you use? Which solution is best for optimization? Apart from the larger file size, has there been any research on whether browsers render multiple classes faster than a large CSS file, or the other way round? Thanks! I'd love to hear your opinion!
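
    To make the idea concrete, here is a minimal sketch of the parametric column mixin mentioned above, assuming equal-width columns (the mixin name is illustrative, and older LESS versions differ on whether the parentheses around the division are required):

        // Compute each column's width from the column count.
        .column-of(@count) {
          float: left;
          width: (100% / @count);
        }

        .my-column-inner {
          .column-of(2);  // compiles to width: 50%;
        }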


  • How to get id of new record after model.save

    - by tonymarschall
    I have a model with the following db structure:

        mysql> describe units;
        +------------+--------------+------+-----+---------+----------------+
        | Field      | Type         | Null | Key | Default | Extra          |
        +------------+--------------+------+-----+---------+----------------+
        | id         | int(11)      | NO   | PRI | NULL    | auto_increment |
        | name       | varchar(128) | NO   |     | NULL    |                |
        | created_at | datetime     | NO   |     | NULL    |                |
        | updated_at | datetime     | NO   |     | NULL    |                |
        +------------+--------------+------+-----+---------+----------------+
        7 rows in set (0.00 sec)

    After creating a new record and saving it, I cannot get the id of the record:

        1.9.3p194 :001 > unit = Unit.new(:name => 'test')
         => #<Unit id: nil, name: "test", created_at: nil, updated_at: nil>
        1.9.3p194 :002 > unit.save
          (0.2ms)  BEGIN
          SQL (0.3ms)  INSERT INTO `units` (`created_at`, `name`, `updated_at`) VALUES ('2012-08-31 23:48:12', 'test', '2012-08-31 23:48:12')
          (144.6ms)  COMMIT
         => true
        1.9.3p194 :003 > unit.inspect
         => "#<Unit id: nil, name: \"test\", created_at: \"2012-08-31 23:48:12\", updated_at: \"2012-08-31 23:48:12\">"

    The model and migration:

        # unit.rb
        class Unit < ActiveRecord::Base
          attr_accessible :name
        end

        # migration
        class CreateUnits < ActiveRecord::Migration
          def change
            create_table :units do |t|
              t.string :name, :null => false
              t.timestamps
            end
          end
        end

    I tried this with other models and got the same result (no id). The data is definitely saved, and I can get it back with Unit.last. Another try, with a fresh model Foo, where id is also nil:

        # /var/www# rails g model Foo name:string
              invoke  active_record
              create    db/migrate/20120904030554_create_foos.rb
              create    app/models/foo.rb
              invoke    test_unit
              create      test/unit/foo_test.rb
              create      test/fixtures/foos.yml
        # /var/www# rake db:migrate
        ==  CreateFoos: migrating =====================================================
        -- create_table(:foos)
           -> 0.3451s
        ==  CreateFoos: migrated (0.3452s) ============================================
        # /var/www# rails c
        Loading development environment (Rails 3.2.8)
        1.9.3p194 :001 > foo = Foo.new(:name => 'bar')
         => #<Foo id: nil, name: "bar", created_at: nil, updated_at: nil>
        1.9.3p194 :002 > foo.save
          (0.2ms)  BEGIN
          SQL (0.4ms)  INSERT INTO `foos` (`created_at`, `name`, `updated_at`) VALUES ('2012-09-04 03:06:26', 'bar', '2012-09-04 03:06:26')
          (103.2ms)  COMMIT
         => true
        1.9.3p194 :003 > foo.inspect
         => "#<Foo id: nil, name: \"bar\", created_at: \"2012-09-04 03:06:26\", updated_at: \"2012-09-04 03:06:26\">"
        1.9.3p194 :004 > Foo.last
          Foo Load (0.5ms)  SELECT `foos`.* FROM `foos` ORDER BY `foos`.`id` DESC LIMIT 1
         => #<Foo id: 1, name: "bar", created_at: "2012-09-04 03:06:26", updated_at: "2012-09-04 03:06:26">


  • Why would Linux VM in vSphere ESXi 5.5 show dramatically increased disk i/o latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem.

    Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client), with 2.5 GB RAM allocated. The disk is laid out the way CentOS offered to set it up, namely as a volume inside an LVM volume group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi is patched up, and the latest VMware tools are installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client.

    Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM, i.e., just repeating the dd command 10 times, which prints the transfer rate each time:

        for i in 1 2 3 4 5 6 7 8 9 10; do
          dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync
        done

    The results are disturbing. It starts off well:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s

    ... but after 7-8 of these, it prints:

        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 82.9779 s, 25.9 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s
        262144+0 records in
        262144+0 records out
        2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s

    If I wait a significant amount of time, say 30-45 minutes, and run it again, it goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+) it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.)

    What could be causing this? I'm confident it is not the disk failing, because I had also configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out of /etc/fstab, rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction.

    (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's beside the point for this question [unless it's actually a hardware problem].)

    ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.)

    ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all (see the sketch below).

    ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south.

    ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: [output not preserved in this copy]. When things go "bad", iostat -d -m -x 1 shows this: [output not preserved in this copy]
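
    For concreteness, here is the loop I use now, combining the Addendum #2 cache flush with the Addendum #3 command (this merely restates the commands above in one runnable form):

        # Flush the page cache between runs, then write a 4 GiB file (larger
        # than the VM's 2.5 GB RAM), bypassing the cache on the write side too.
        for i in $(seq 1 10); do
          sync; echo 3 > /proc/sys/vm/drop_caches
          dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct
        done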


  • How would you structure your entity model for storing arbitrary key/value data with different data types?

    - by Nathan Ridley
    I keep coming across scenarios where it would be useful to store a set of arbitrary data in a table using a per-row key/value model, rather than a rigid column/field model. The problem is, I want to store the values with their correct data types rather than converting everything to a string. This means I have to choose either a single table with multiple nullable columns, one for each data type, or a set of value tables, one for each data type. I'm also unsure whether I should use full third normal form and separate the keys into their own table, referencing them via a foreign key from the value table(s), or whether it would be better to keep things simple, store the string keys in the value table(s), and accept the duplication of strings.

    Old/bad: this solution makes adding additional values a pain in a fluid environment, because the table needs to be modified regularly.

        MyTable
        ============================
        ID    Key1    Key2    Key3
        int   int     string  date
        ----------------------------
        1     Value1  Value2  Value3
        2     Value4  Value5  Value6

    Single-table solution: this allows simplicity via a single table. The querying code still needs to check for nulls to determine which data type the field is storing. A check constraint is probably also required to ensure only one of the value fields contains non-null data.

        DataValues
        =============================================================
        ID    RecordID  Key     IntValue  StringValue  DateValue
        int   int       string  int       string       date
        -------------------------------------------------------------
        1     1         Key1    Value1    NULL         NULL
        2     1         Key2    NULL      Value2       NULL
        3     1         Key3    NULL      NULL         Value3
        4     2         Key1    Value4    NULL         NULL
        5     2         Key2    NULL      Value5       NULL
        6     2         Key3    NULL      NULL         Value6

    Multiple-table solution: this allows more concise purposing of each table, though the code needs to know the data type in advance, as it needs to query a different table for each data type. Indexing is probably simpler and more efficient because there are fewer columns that need indexing.

        IntegerValues
        ===============================
        ID    RecordID  Key     Value
        int   int       string  int
        -------------------------------
        1     1         Key1    Value1
        2     2         Key1    Value4

        StringValues
        ===============================
        ID    RecordID  Key     Value
        int   int       string  string
        -------------------------------
        1     1         Key2    Value2
        2     2         Key2    Value5

        DateValues
        ===============================
        ID    RecordID  Key     Value
        int   int       string  date
        -------------------------------
        1     1         Key3    Value3
        2     2         Key3    Value6

    How do you approach this problem? Which solution is better? Also, should the key column be separated into its own table and referenced via a foreign key, or should it be kept in the value table and bulk-updated if for some reason the key name changes?
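
    To illustrate the single-table variant with the check constraint mentioned above, here is a minimal sketch in T-SQL-flavored syntax (column names come from the tables above; the exact constraint syntax will vary by database engine):

        CREATE TABLE DataValues (
            ID          INT PRIMARY KEY,
            RecordID    INT NOT NULL,
            [Key]       VARCHAR(64) NOT NULL,
            IntValue    INT NULL,
            StringValue VARCHAR(256) NULL,
            DateValue   DATE NULL,
            -- Exactly one of the typed value columns may be non-null per row.
            CHECK (
                (CASE WHEN IntValue    IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN StringValue IS NOT NULL THEN 1 ELSE 0 END) +
                (CASE WHEN DateValue   IS NOT NULL THEN 1 ELSE 0 END) = 1
            )
        );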


  • Choosing an image locally from an HTTP URL and serving that image without a server round trip

    - by serverman
    Hi folks, I am a complete novice to Flash (never created anything in Flash). I am quite familiar with web applications (J2EE based) and have reasonable expertise in JavaScript. Here is my requirement: I want the user to select an image via an HTML form. Normally, on post, this image would be sent to the server and perhaps stored there to be served later. I do not want that. I want to store this image locally and then serve it via HTTP to the user. So the flow is:

    1. Go to the "select image" URL: mywebsite.com/selectImage
    2. Browse and select the image. This would transfer control locally to some code running on the client (JavaScript or Flash), which would then store the image locally at some place on the client machine.
    3. Go to the "show image" URL: mywebsite.com/showImage. This would eventually result in some client code running in the browser that retrieves the image and renders it (without any server round trips).

    I considered the following options (see the sketch after this list):

    1. Use HTML5 local storage. Since I am a complete novice to Flash, I looked into this. I found that it is fairly straightforward to store and retrieve images in JavaScript (only strings are allowed, but I am hoping storing base64-encoded strings would work, at least for small images). However, how do I serve the image via an HTTP URL that points to my server without a server round trip? I saw the interesting article at http://hacks.mozilla.org/category/fileapi/ but that would work only in Firefox, and I need to work on all the latest browsers (at least the ones supporting HTML5 local storage).
    2. Use Flash SharedObjects. OK, this would have been good - the only thing is I am not sure where to start. Snippets of ActionScript to do this are scattered everywhere, but I do not know how to use those scripts in an actual HTML page :) I do not need to create any movies or anything - just need to store an image and serve it locally. If I go this route, I would also use it to store other "strings" locally. If you suggest this, please give me the exact steps (could be pointers to other web sites) on how to do this. I would ideally like to avoid paying for any Flash development environment software :)

    Thank you!
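
    Here is a minimal sketch of what I understand option 1 to look like, assuming a browser that supports both the File API and localStorage (the element IDs are illustrative):

        // On the "select image" page: read the chosen file as a base64
        // data: URL string and store it locally.
        document.getElementById('picker').addEventListener('change', function (e) {
          var reader = new FileReader();
          reader.onload = function () {
            localStorage.setItem('storedImage', reader.result); // data: URL string
          };
          reader.readAsDataURL(e.target.files[0]);
        });

        // Later, on the "show image" page: render it without a round trip.
        var data = localStorage.getItem('storedImage');
        if (data) document.getElementById('preview').src = data;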


  • How to fix this wicked Speech Recognition bug?

    - by aF
    I have this code in my C# project:

        public void startRecognition(string pName)
        {
            presentationName = pName;
            if (WaveNative.waveInGetNumDevs() > 0)
            {
                string grammar = System.Environment.GetEnvironmentVariable("PUBLIC") +
                    "\\SoundLog\\Presentations\\" + presentationName + "\\SpeechRecognition\\soundlog.cfg";
                if (File.Exists(grammar))
                {
                    File.Delete(grammar);
                }
                executeCommand();

                /// Create an instance of SpSharedRecoContextClass which will be used
                /// to interface with the incoming audio stream
                recContext = new SpSharedRecoContextClass();

                // Create the grammar object
                recContext.CreateGrammar(1, out recGrammar);
                //recContext.CreateGrammar(2, out recGrammar2);

                // Set up dictation mode
                //recGrammar2.SetDictationState(SpeechLib.SPRULESTATE.SPRS_ACTIVE);
                //recGrammar2.SetGrammarState(SPGRAMMARSTATE.SPGS_ENABLED);

                // Set appropriate grammar mode
                if (File.Exists(grammar))
                {
                    recGrammar.LoadCmdFromFile(grammar, SPLOADOPTIONS.SPLO_STATIC);
                    //recGrammar.SetDictationState(SpeechLib.SPRULESTATE.SPRS_INACTIVE);
                    recGrammar.SetGrammarState(SPGRAMMARSTATE.SPGS_ENABLED);
                    recGrammar.SetRuleIdState(0, SPRULESTATE.SPRS_ACTIVE);
                }

                /// Bind a callback to the recognition event which will be invoked
                /// when a dictated phrase has been recognised.
                recContext.Recognition += new _ISpeechRecoContextEvents_RecognitionEventHandler(handleRecognition);
                // System.Windows.Forms.MessageBox.Show(recContext.ToString());
                // compiled grammar
            }
        }

        private static void handleRecognition(int StreamNumber, object StreamPosition,
            SpeechLib.SpeechRecognitionType RecognitionType, SpeechLib.ISpeechRecoResult Result)
        {
            string temp = Result.PhraseInfo.GetText(0, -1, true);
            _recognizedText = "";
            // System.Windows.Forms.MessageBox.Show(temp);
            // System.Windows.Forms.MessageBox.Show(recognizedWords.Count.ToString());
            foreach (string word in recognizedWords)
            {
                if (temp.Contains(word))
                {
                    // System.Windows.Forms.MessageBox.Show("yes");
                    _recognizedText = word;
                }
            }
        }

    This code generates a DLL that I use in another application. Now, the wicked bug: when I run the startRecognition method at the beginning of the other application's execution, this code works very well. But when I run it some time after the beginning, the code runs but the handleRecognition method is never called. I can see that the words are recognized, because they appear in the Microsoft Speech Recognition app, but the handler method is never called. Do you know what the problem with this code is?

    NOTE: this project has some code that is always being executed. Might that be the problem? Does the other code's running prevent this from running?


  • Can I have a workspace that is both a git workspace and an svn workspace?

    - by Troy
    I have checked out a local working copy of a codebase that lives in an svn repo. It's a big Java project that I use Eclipse to develop in. Eclipse of course builds everything on the fly, in its own way, with all the binaries ending up in [project root]/bin. That's perfectly fine with me for development, but when the build runs on the build server it looks quite a lot different (maven build, binaries end up in a different directory structure, etc.). Sometimes I need to recreate the build server environment on my local development system to debug the build or what have you, so I usually end up downloading an entirely new working copy into a new workspace and running the build from there (this prevents cluttering my development workspace with all the build artifacts and dirtying up the working copy).

    Of course, sometimes I'm interested in running the full build on code that I don't want to check in yet, so I will manually copy the "development" workspace over the "build" workspace. Besides taking a lot of extra time copying a lot of files that I don't actually need (just overlaying the new over the old), this also screws up my svn metadata, meaning that I can't check in changes from that "build workspace" working copy, and I often end up having to re-download the code to get it back into a known state.

    So I'm thinking I could make my svn working copy a local git repo, then "check out" the in-development code from the svn working copy/git master into the local build workspace. Then I can build, revert my changes, and have all the advantages of a version-controlled working copy in the build workspace. If I need to make changes to the build, I push those back into the git master (which is also an svn working copy), then check them into the main svn repo.

        |-------------|
        |main svn repo| <------- |---------------------|
        |-------------|          | svn working copy    | <------- |--------------------|
                                 | (svn dev workspace/ |          | non-svn-versioned  |
                                 |  git master)        |          | build workspace    |
                                 |---------------------|          | (git working copy) |
                                                                  |--------------------|

    Just switching everything to git would obviously be better but - big company, too many people using svn, too costly to change everything, etc. We're stuck with svn as the main repo for now. BTW, I know there is a maven plugin for Eclipse and everything; I'm mainly interested to know whether there is a way to maintain a workspace that is both a git working copy and an svn working copy. Actually, any distributed version control system would probably work (hg possibly?). Advice? How does everybody else handle this situation of having to manage both a "development" build process and a "production" build process?
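
    To make the idea concrete, here is a rough sketch of the setup I have in mind (the paths are illustrative, and the .gitignore line assumes a pre-1.7 svn layout with .svn directories throughout the tree):

        # In the existing svn working copy, create the git "master" repo:
        cd ~/dev-workspace
        git init
        echo ".svn/" >> .gitignore      # keep svn metadata out of git
        git add . && git commit -m "baseline"

        # Create the build workspace as a git clone (no svn metadata here):
        git clone ~/dev-workspace ~/build-workspace
        cd ~/build-workspace
        # ... run the maven build, experiment, revert freely ...

        # Pull any build fixes back into the dev workspace, then commit to svn:
        cd ~/dev-workspace
        git pull ~/build-workspace master
        svn commit -m "build fixes"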


  • How to select table column names in a view and pass them to the controller in Rails?

    - by zachd1_618
    So I am new to Rails, and to OO programming in general. I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused by params, forms, and select helpers. What I want to do is use Rails drop-downs to pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part of the task working, but I can't seem to pass values from my view to the controller. For simplicity's sake, say my database schema looks like this:

        --------------Plot---------------
        |____x____|____y1____|____y2____|
        |    1    |    1     |    1     |
        |    2    |    2     |    4     |
        |    3    |    3     |    9     |
        |    4    |    4     |    16    |
        |    5    |    5     |    25    |

    ... and in my model I have dynamic selector scopes that let me select just certain columns of data, in Plot.rb:

        class Plot < ActiveRecord::Base
          scope :select_var, lambda { |varname| select(varname) }
          scope :between_x, lambda { |x1, x2| where("x BETWEEN ? and ?", "#{x1}", "#{x2}") }
        end

    So I can call:

        irb>> @p1 = Plot.select_var(['x','y1']).between_x(1,3)

    and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values between x=1 and x=3, which I plot dynamically.

    I want to start off in a view (plot/index) where I can dynamically select which variable names (table column names) and which rows to fetch from the database and plot. The problem is that most select helpers don't seem to work with columns in the database, only rows. So to select columns, I first get an array of the column names that exist in my database, with a function I wrote, in the Plots controller:

        def index
          d = Plot.first
          @tags = d.list_vars
        end

    So @tags = ['x','y1','y2']. Then in plot/index.html.erb I try to use a drop-down to select which variables I send back to the controller:

        <%= select_tag( :variable, options_for_select(@plots.first.list_vars,:name,:multiple=>:true) )%>
        <%= button_to 'Plot now!', :controller =>"plots/plot_vars", :variable => params[:variable]%>

    Finally, in the Plots controller again:

        def plot_vars
          @plot_data = Plot.select_vars([params[:variable]])
        end

    The problem is that every time I try this (or one of a hundred variations thereof), params[:variable] is nil. How can I use a drop-down to pass a parameter with string variable names to the controller? Sorry it's so long; I have been struggling with this for about a month now. :-(

    I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities. I really have a data structure, not a data object. Trying to work with the structure in terms of data-object speak is not necessarily the easiest thing to do, I think. For background: my actual database has about 250 columns and a couple million rows, and they get changed and modified from time to time. I know I can make the database smarter, but it's not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this. Each one has a web developer who spends months setting up a web interface and their own janky plotting setup. I want to make this completely dynamic, as a plug-and-play solution, so all you have to do is specify your database connection and this Rails setup will automatically show and plot the data you want. I am more of a sequential programmer and number cruncher, as are many people here. I think this project could be very helpful in the end, but it's difficult for me to figure out right now.
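
    In case it helps frame answers, here is a sketch of what I understand a submitting version of the view might look like - wrapping the drop-down in a form that posts to the action, so the selection actually travels with the request (names are the ones used above; this is my guess, not tested):

        <%= form_tag({ :controller => "plots", :action => "plot_vars" }) do %>
          <%= select_tag :variable, options_for_select(@tags) %>
          <%= submit_tag 'Plot now!' %>
        <% end %>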


  • Dynamic loaded libraries and shared global symbols

    - by phlipsy
    Since I observed some strange behavior of global variables in my dynamically loaded libraries, I wrote the following test. First we need a statically linked library. The header, base.hpp:

        #ifndef __BASE_HPP
        #define __BASE_HPP

        #include <iostream>

        class test
        {
        private:
            int value;

        public:
            test(int value) : value(value)
            {
                std::cout << "test::test(int) : value = " << value << std::endl;
            }

            ~test()
            {
                std::cout << "test::~test() : value = " << value << std::endl;
            }

            int get_value() const { return value; }
            void set_value(int new_value) { value = new_value; }
        };

        extern test global_test;

        #endif // __BASE_HPP

    and the source, base.cpp:

        #include "base.hpp"

        test global_test = test(1);

    Then I wrote a dynamically loaded library, library.cpp:

        #include "base.hpp"

        extern "C" {
            test* get_global_test()
            {
                return &global_test;
            }
        }

    and a client program loading this library, client.cpp:

        #include <iostream>
        #include <dlfcn.h>
        #include "base.hpp"

        typedef test* get_global_test_t();

        int main()
        {
            global_test.set_value(2); // global_test from libbase.a
            std::cout << "client: " << global_test.get_value() << std::endl;

            void* handle = dlopen("./liblibrary.so", RTLD_LAZY);
            if (handle == NULL)
            {
                std::cout << dlerror() << std::endl;
                return 1;
            }

            get_global_test_t* get_global_test = NULL;
            void* func = dlsym(handle, "get_global_test");
            if (func == NULL)
            {
                std::cout << dlerror() << std::endl;
                return 1;
            }
            else
                get_global_test = reinterpret_cast<get_global_test_t*>(func);

            test* t = get_global_test(); // global_test from liblibrary.so
            std::cout << "liblibrary.so: " << t->get_value() << std::endl;
            std::cout << "client: " << global_test.get_value() << std::endl;

            dlclose(handle);

            return 0;
        }

    Now I compile the statically linked library with

        g++ -Wall -g -c base.cpp
        ar rcs libbase.a base.o

    the dynamically loaded library with

        g++ -Wall -g -fPIC -shared library.cpp libbase.a -o liblibrary.so

    and the client with

        g++ -Wall -g -ldl client.cpp libbase.a -o client

    Now I observe: the client and the dynamically loaded library possess different versions of the variable global_test. But in my project I'm using cmake. The build script looks like this:

        CMAKE_MINIMUM_REQUIRED(VERSION 2.6)
        PROJECT(globaltest)

        ADD_LIBRARY(base STATIC base.cpp)

        ADD_LIBRARY(library MODULE library.cpp)
        TARGET_LINK_LIBRARIES(library base)

        ADD_EXECUTABLE(client client.cpp)
        TARGET_LINK_LIBRARIES(client base dl)

    Analyzing the created makefiles, I found that cmake builds the client with

        g++ -Wall -g -ldl -rdynamic client.cpp libbase.a -o client

    This ends up in slightly different but fatal behavior: the global_test of the client and of the dynamically loaded library are the same, but it is destroyed twice at the end of the program. Am I using cmake in a wrong way? Is it possible for the client and the dynamically loaded library to use the same global_test, but without this double-destruction problem?


  • ActionScript MovieClip moves to the left, but not the right

    - by Defcon
    I have a stage with a movie clip with the instance name of "mc". Currently I have code that is supposed to move the player left and right, and when the left or right key is released, the "mc" slides a little bit. The problem I'm having is that making the "mc" move to the left works, but the exact same code used for the right doesn't. All of this code is present on the main stage, frame one.

        //Variables
        var mcSpeed:Number = 0;//MC's Current Speed
        var mcJumping:Boolean = false;//if mc is Jumping
        var mcFalling:Boolean = false;//if mc is Falling
        var mcMoving:Boolean = false;//if mc is Moving
        var mcSliding:Boolean = false;//if mc is sliding
        var mcSlide:Number = 0;//Stored for use when creating slide
        var mcMaxSlide:Number = 1.6;//Max Distance the object will slide.

        //Player Move Function
        p1Move = new Object();
        p1Move = function (dir:String, maxSpeed:Number) {
            if (dir == "left" && _root.mcSpeed<maxSpeed) {
                _root.mcSpeed += .2;
                _root.mc._x -= _root.mcSpeed;
            } else if (dir == "right" && _root.mcSpeed<maxSpeed) {
                _root.mcSpeed += .2;
                _root.mc._x += _root.mcSpeed;
            } else if (dir == "left" && speed>=maxSpeed) {
                _root.mc._x -= _root.mcSpeed;
            } else if (dir == "right" && _root.mcSpeed>=maxSpeed) {
                _root.mc._x += _root.mcSpeed;
            }
        }

        //onEnterFrame for MC
        mc.onEnterFrame = function():Void {
            if (Key.isDown(Key.LEFT)) {
                if (_root.mcMoving == false && _root.mcSliding == false) {
                    _root.mcMoving = true;
                } else if (_root.mcMoving == true && _root.mcSliding == false) {
                    _root.p1Move("left",5);
                }
            } else if (!Key.isDown(Key.LEFT)) {
                if (_root.mcMoving == true && _root.mcSliding == false) {
                    _root.mcSliding = true;
                } else if (_root.mcMoving == true && _root.mcSliding == true && _root.mcSlide<_root.mcMaxSlide) {
                    _root.mcSlide += .2;
                    this._x -= .2;
                } else if (_root.mcMoving == true && _root.mcSliding == true && _root.mcSlide>=_root.mcMaxSlide) {
                    _root.mcMoving = false;
                    _root.mcSliding = false;
                    _root.mcSlide = 0;
                    _root.mcSpeed = 0;
                }
            } else if (Key.isDown(Key.RIGHT)) {
                if (_root.mcMoving == false && _root.mcSliding == false) {
                    _root.mcMoving = true;
                } else if (_root.mcMoving == true && _root.mcSliding == false) {
                    _root.p1Move("right",5);
                }
            } else if (!Key.isDown(Key.RIGHT)) {
                if (_root.mcMoving == true && _root.mcSliding == false) {
                    _root.mcSliding = true;
                } else if (_root.mcMoving == true && _root.mcSliding == true && _root.mcSlide<_root.mcMaxSpeed) {
                    _root.mcSlide += .2;
                    this._x += .2;
                } else if (_root.mcMoving == true && _root.mcSliding == true && _root.mcSlide>=_root.mcMax) {
                    _root.mcMoving = false;
                    _root.mcSliding = false;
                    _root.mcSlide = 0;
                    _root.mcSpeed = 0;
                }
            }
        };

    I just don't get why, when you press the left arrow, it works completely fine, but when you press the right arrow it doesn't respond. It is literally the same code.


  • PHP Form - Empty input enter this text - Validation

    - by James Skelton
    No doubt a very simple question for someone with PHP knowledge. I have a form with a datepicker. All is fine when a user has selected a date: the email is sent with

        Date: 2012 04 10

    But if the user has skipped this and left it blank (as I have not made it required), I would like it to send as

        Date: Not Entered (<-- or something)

    Instead, at the minute, it of course reads

        Date:

    Form input:

        <input type="text" class="form-control" id="datepicker" name="datepicker" size="50" value="Date Of Wedding" />

    This is the validator:

        $(document).ready(function(){
            //validation contact form
            $('#submit').click(function(event){
                event.preventDefault();
                var fname = $('#name').val();
                var validInput = new RegExp(/^[a-zA-Z0-9\s]+$/);
                var email = $('#email').val();
                var validEmail = new RegExp(/^([a-zA-Z0-9_\.\-])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/);
                var message = $('#message').val();

                if(fname==''){
                    showError('<div class="alert alert-danger">Please enter your name.</div>', $('#name'));
                    $('#name').addClass('required');
                    return;}
                if(!validInput.test(fname)){
                    showError('<div class="alert alert-danger">Please enter a valid name.</div>', $('#name'));
                    $('#name').addClass('required');
                    return;}
                if(email==''){
                    showError('<div class="alert alert-danger">Please enter an email address.</div>', $('#email'));
                    $('#email').addClass('required');
                    return;}
                if(!validEmail.test(email)){
                    showError('<div class="alert alert-danger">Please enter a valid email.</div>', $('#email'));
                    $('#email').addClass('required');
                    return;}
                if(message==''){
                    showError('<div class="alert alert-danger">Please enter a message.</div>', $('#message'));
                    $('#message').addClass('required');
                    return;}

                // setup some local variables
                var request;
                var form = $(this).closest('form');
                // serialize the data in the form
                var serializedData = form.serialize();

                // fire off the request to /contact.php
                request = $.ajax({
                    url: "contact.php",
                    type: "post",
                    data: serializedData
                });

                // callback handler that will be called on success
                request.done(function (response, textStatus, jqXHR){
                    $('.contactWrap').show( 'slow' ).fadeIn("slow").html(' <div class="alert alert-success centered"><h3>Thank you! Your message has been sent.</h3></div> ');
                });

                // callback handler that will be called on failure
                request.fail(function (jqXHR, textStatus, errorThrown){
                    // log the error to the console
                    console.error( "The following error occured: "+ textStatus, errorThrown );
                });
            });

            //remove 'required' class and hide error
            $('input, textarea').keyup( function(event){
                if($(this).hasClass('required')){
                    $(this).removeClass('required');
                    $('.error').hide("slow").fadeOut("slow");
                }
            });

            // show error
            showError = function (error, target){
                $('.error').removeClass('hidden').show("slow").fadeIn("slow").html(error);
                $('.error').data('target', target);
                $(target).focus();
                console.log(target);
                console.log(error);
                return;
            }
        });
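
    For reference, here is a minimal sketch of the kind of server-side check I imagine going into contact.php where the mail body is built (the variable names are assumptions, and note that the form's placeholder value "Date Of Wedding" should probably count as empty too):

        <?php
        $date = isset($_POST['datepicker']) ? trim($_POST['datepicker']) : '';
        if ($date === '' || $date === 'Date Of Wedding') {
            $date = 'Not Entered';
        }
        $body = "Date: " . $date; // then append the rest of the message fields
        ?>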


  • Not sure what happens to my app's objects when using NSURLSession in background - what state is my app in?

    - by Avner Barr
    This is more of a general question - I don't understand the workings of NSURLSession when using it in "background session mode". I will supply some simple contrived example code.

    I have a database which holds objects, such that portions of this data can be uploaded to a remote server. It is important to know which data/objects were uploaded, in order to accurately display information to the user. It is also important to be able to upload to the server in a background task, because the app can be killed at any point.

    For instance, a simple profile picture object:

        @interface ProfilePicture : NSObject
        @property int userId;
        @property UIImage *profilePicture;
        @property BOOL successfullyUploaded; // we want to know if the image was uploaded to our server -
                                             // this could also be a queryable property, but let's assume
                                             // it is attached to this object
        @end

    Now let's say I want to upload the profile picture to a remote server. I could do something like:

        @implementation ProfilePictureUploader

        - (void)uploadProfilePicture:(ProfilePicture *)profilePicture completion:(void (^)(BOOL successInUploading))completion {
            NSURLSession *uploadImageSession = ...; // code to set up uploading the image and calling the completion handler
            [uploadImageSession resume];
        }

        @end

    Now somewhere else in my code I want to upload the profile picture, and if it was successful, update the UI and the database to reflect that:

        ProfilePicture *aNewProfilePicture = ...;
        aNewProfilePicture.profilePicture = aImage;
        aNewProfilePicture.userId = 123;
        aNewProfilePicture.successfullyUploaded = NO;

        // write the change to disk
        [MyDatabase write:aNewProfilePicture];

        // upload the image to the server
        ProfilePictureUploader *uploader = [ProfilePictureUploader ...];
        [uploader uploadProfilePicture:aNewProfilePicture completion:^(BOOL successInUploading) {
            if (successInUploading) {
                // persist the change to my db
                aNewProfilePicture.successfullyUploaded = YES;
                [MyDatabase update:aNewProfilePicture]; // persist the change
            }
        }];

    Now obviously, if my app is running, this ProfilePicture object is successfully uploaded and all is well - the database object has its own internal workings with data structures/caches and whatnot. All callbacks that may exist are maintained and the app state is straightforward.

    But I'm not clear on what happens if the app dies at some point during the upload. It seems that any callbacks/notifications are dead. According to the API documentation, the uploading is handled by a separate process. Therefore the upload will continue, and my app will be awakened at some point in the future to handle completion. But the object aNewProfilePicture is nonexistent at that point, and all callbacks/objects are gone. I don't understand what context exists at this point. How am I supposed to ensure consistency in my DB and UI (for instance, update the successfullyUploaded property for that user)? Do I need to rework everything touching the DB or UI to correspond with the new API and work in a context-free environment?
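
    From reading the docs, my understanding so far is sketched below - the UIKit app-delegate callback is real, but the property names and the idea of keying tasks to DB records via taskDescription are my assumptions, not tested:

        // When the app is relaunched for background-transfer events, UIKit calls
        // this; recreate the session with the SAME identifier and reattach a
        // delegate, then rebuild state from the database rather than from
        // in-memory objects (which are gone).
        - (void)application:(UIApplication *)application
            handleEventsForBackgroundURLSession:(NSString *)identifier
                              completionHandler:(void (^)())completionHandler {
            NSURLSessionConfiguration *config =
                [NSURLSessionConfiguration backgroundSessionConfiguration:identifier];
            self.session = [NSURLSession sessionWithConfiguration:config
                                                         delegate:self
                                                    delegateQueue:nil];
            self.backgroundCompletionHandler = completionHandler; // call when done
        }

        // In the session delegate, map the finished task back to a record:
        - (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task
            didCompleteWithError:(NSError *)error {
            if (!error) {
                // task.taskDescription could hold the userId (set when the task
                // was created), so the DB row can be marked successfullyUploaded.
            }
        }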


  • What is the most idiomatic way to emulate Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used?

    Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = (); # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set; # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) { # Same
            run_test($test);
        }

        sub run_test {} # Same

    I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - once in BEGIN {} and once after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated). I'm not sure whether that is any better idiomatically.
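
    One variant I'm considering, sketched below - this relies on plan() being callable at run time before the first test runs, which I believe works in 0.80 but have not verified:

        use Test::More;

        my @test_set = (
            [ "Test #1", ... ],
            [ "Test #2", ... ],
        );

        # Compute the plan at run time, before any test has run:
        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;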


  • How do I use JDK 7 on Mac OSX?

    - by Yko
    OK, this is a newbie question, but I can't figure it out... I would like to use the WatchService API as described in this link: http://download.oracle.com/javase/tutorial/essential/io/notification.html

    After reading around, I found out that WatchService is part of the NIO package, which is scheduled for JDK 7, so it is in beta form. That's fine. http://jdk7.java.net/download.html has the JDK, which I downloaded and extracted. I got a bunch of folders, and I don't know what to do with them.

    Then I read around some more and found that a nice group of people created JDK 7 as a binary, so someone like me can install it easily. It is called OpenJDK: http://code.google.com/p/openjdk-osx-build/ So I downloaded the .dmg file and installed it. Then I opened "Java Preferences" and saw that OpenJDK 7 is available.

    So now I feel that I can start trying out the WatchService API. The tutorial in the first link provides a .java file to test it out first and make sure that it is running. Here is the link to the file: http://download.oracle.com/javase/tutorial/essential/io/examples/WatchDir.java

    So I boot up Eclipse (actually I use STS) and create a new Java project, choosing JavaSE-1.7 under "use an execution environment JRE:". Under the src folder, I paste in the WatchDir.java file. And I still see tons of squiggly red lines: all the "import java.nio.*" lines are red, and I cannot run it as a Java app.

    If you read this far, thanks a lot. So now... what do I need to do? Thanks.

    EDIT: I did not actually pursue using Java 7, but there is a lot of interest in this question and people keep answering it. What should I do to make it more relevant to people who search for it? Let me know by PMing me. Thanks.
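
    For reference, this is the core of what WatchDir.java exercises - a sketch against the final Java 7 API (early JDK 7 beta builds named some of these classes slightly differently, so it may not compile on every beta):

        import java.nio.file.*;

        public class WatchDemo {
            public static void main(String[] args) throws Exception {
                // Register a directory for file-creation events.
                WatchService watcher = FileSystems.getDefault().newWatchService();
                Path dir = Paths.get("/tmp");
                dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

                WatchKey key = watcher.take(); // blocks until an event arrives
                for (WatchEvent<?> event : key.pollEvents()) {
                    System.out.println(event.kind() + ": " + event.context());
                }
                key.reset();
            }
        }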


  • What's the fastest lookup algorithm for a pair data structure (i.e., a map)?

    - by truncheon
    In the following example a std::map structure is filled with 26 values, a - z for the key and 0 - 25 for the value. The time taken (on my system) to look up the last entry (10,000,000 times) is roughly 250 ms for the vector and 125 ms for the map. (I compiled using release mode, with the O3 option turned on, for g++ 4.4.)

    But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience with the performance-critical aspects of C++ programming.

    UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real-world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps a map) to store the variable names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names.

        #include <ctime>
        #include <map>
        #include <vector>
        #include <iostream>

        struct mystruct
        {
            char key;
            int value;

            mystruct(char k = 0, int v = 0) : key(k), value(v) { }
        };

        int find(const std::vector<mystruct>& ref, char key)
        {
            for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i)
                if (i->key == key) return i->value;
            return -1;
        }

        int main()
        {
            std::map<char, int> mymap;
            std::vector<mystruct> myvec;

            for (int i = 'a'; i < 'a' + 26; ++i)
            {
                mymap[i] = i - 'a';
                myvec.push_back(mystruct(i, i - 'a'));
            }

            int pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                find(myvec, 'z');
            }
            std::cout << "linear scan: milli " << clock() - pre << "\n";

            pre = clock();
            for (int i = 0; i < 10000000; ++i)
            {
                mymap['z'];
            }
            std::cout << "map scan: milli " << clock() - pre << "\n";

            return 0;
        }
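
    For comparison, here is a sketch of the hash-based alternative I've seen mentioned, assuming g++ 4.4's TR1 headers (in C++0x mode the same container is available as std::unordered_map):

        #include <tr1/unordered_map>
        #include <string>

        int main()
        {
            // Average O(1) lookup vs O(log n) for std::map; with randomly
            // named script variables, string keys should hash well.
            std::tr1::unordered_map<std::string, int> vars;
            vars["some_variable"] = 42;
            int v = vars["some_variable"];
            return v == 42 ? 0 : 1;
        }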


  • Device drivers and Windows

    - by b-gen-jack-o-neill
    Hi, I am trying to complete my picture of how the PC and the OS interact, and I am at a point where I am a little out of my depth when it comes to device drivers. Please don't write things like "it's too complicated" or "you don't need to know that when using a high-level programming language and WinAPI functions". I want to know; it's for study purposes.

    So, the very basic structure of how the OS and the PC (by PC I mean, of course, the hardware) interact, as I see it, is that everything other than direct CPU commands, which the CPU can do on its own (arithmetic operations, access to its registers, and memory access), must pass through the OS. Mainly because from ring level 3 you cannot use the in and out instructions, which are used for accessing other hardware. I know that there is MMIO, but it must be set up by port communication first.

    It was not like this all the time. Even though I am a bit young to remember MS-DOS, I know you could access hardware directly, because there was no limitation, no ring mode. So to write a string to the display, you could either use a DOS function or directly access video card memory and write it yourself. But as OSes developed, this possibility went away. That's fine, since the OS now handles all the hardware communication, and frankly it's more convenient and much safer (I would say the only option) in a multitasking environment. So nowadays, instead of using int instructions to call BIOS-mapped functions or DOS functions, you call a DLL, which internally handles everything you don't need to know about. I understand this.

    I also understand that a device driver is the piece of code that runs at ring level 0, so it can do all the hardware interactions. But what I don't understand is the connection between the OS and the device driver. Let's take an example: I want to make the sound card play a sound. So I call the Windows API to access the sound card, but what happens then? Does Windows call the device driver to do so? But if it does call the device driver, does that mean all device drivers which can be called by a WinAPI function must have routines named in some specific way? I mean, when I get a new sound card, must its drivers have functions named the same as the old one's, so that Windows can call the same function from its perspective? But if Windows had a predefined set of functions required of device drivers, then it could not use new drivers that didn't exist before the latest version of the OS came out. Please help me understand this mess. I am really getting mad. Thanks.


  • Is 4-5 years the “Midlife Crisis” for a programming career?

    - by Jeffrey
    I've been programming C# professionally for a bit over 4 years now. For the past 4 years I've worked for a few small/medium companies, ranging from web/ads agencies and small industry-specific software shops to a small startup. I've mainly been doing "business apps" that involve using high-level (garbage-collected) programming languages, and my overall experience is that all of the work I've done could have been more professional. A lot of things were done incorrectly (in a rush), mainly due to cost: people always wanted something "now" and with the smallest amount of spendable money. I kept thinking that maybe if I could work for a bigger company, or a company better suited for programmers, or somewhere that had the money and time to really build something longer-term and more maintainable, I might have enjoyed my career more.

    I've never had a mentor who guided me through my 4-year career. I am pretty much a blog/Google/self-taught programmer, apart from my bachelor's degree in IT. I've also observed that most of the so-called "senior" programmers in my working environment are really not that senior skill-wise. They are "senior" only because they've been programmers a long time, but the code they write and the decisions they make are absolutely rubbish! They don't want to learn, they don't want to be better; they just want to get paid and do what they've been told to do, which makes sense, and most of us are like that. Maybe that's why they are where they are now. But I don't want to become like them; I want to be better.

    I've reached a mental state where I no longer intend to be a programmer for my future career. I've started to think maybe there are better things out there to work on. The more blogs I read and the more "best practices" I try, the more I feel I am drifting away from "my reality". But I am not a great programmer; otherwise I don't think I would be where I am now.

    I think 4-5 years is a stage that can be a step forward career-wise, or a step out of where you are. I just wanted to hear what others have to say about what I've mentioned above, and whether you've experienced a similar situation in your past programming career and how you dealt with it. Thanks.


  • How can I obtain the IPv4 address of the client?

    - by Dr Dork
    Hello! I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server side code setup to listen for incoming TCP connection requests from clients after the parent socket has been created and is set to listen... int sockfd, newfd; unsigned int len; socklen_t sin_size; char msg[]="Test message sent"; char buf[MAXLEN]; int st, rv; struct addrinfo hints, *serverinfo, *p; struct sockaddr_storage client; char ip[INET6_ADDRSTRLEN]; . . //parent socket creation and listen code omitted for simplicity . //wait for connection requests from clients while(1) { //Returns the socketID and address of client connecting to socket if( ( newfd = accept(sockfd, (struct sockaddr *)&client, &len) ) == -1 ){ perror("Accept"); exit(-1); } if( (rv = recv(newfd, buf, MAXLEN-1, 0 )) == -1) { perror("Recv"); exit(-1); } struct sockaddr_in *clientAddr = ( struct sockaddr_in *) get_in_addr((struct sockaddr *)&client); inet_ntop(client.ss_family, clientAddr, ip, sizeof ip); printf("Receive from %s: query type is %s\n", ip, buf); if( ( st = send(newfd, msg, strlen(msg), 0)) == -1 ) { perror("Send"); exit(-1); } //ntohs is used to avoid big-endian and little endian compatibility issues printf("Send %d byte to port %d\n", ntohs(clientAddr->sin_port) ); close(newfd); } } I found the get_in_addr function online and placed it at the top of my code and use it to obtain the IP address of the client connecting... // get sockaddr, IPv4 or IPv6: void *get_in_addr(struct sockaddr *sa) { if (sa->sa_family == AF_INET) { return &(((struct sockaddr_in*)sa)->sin_addr); } return &(((struct sockaddr_in6*)sa)->sin6_addr); } but the function always returns the IPv6 IP address since thats what the sa_family property is set as. My question is, is the IPv4 IP address stored anywhere in the data I'm using and, if so, how can I access it? Thanks so much in advance for all your help!


  • How can I enable a debugging mode via a command-line switch for my Perl program?

    - by Michael Mao
    I am learning Perl in a "head-first" manner. I am absolutely a newbie in this language. I am trying to have a debug_mode switch on the CLI which can be used to control how my script works, by switching certain subroutines "on and off". Below is what I've got so far:

        #!/usr/bin/perl -s -w
        # purpose : make subroutine execution optional,
        #           depending on a CLI switch flag

        use strict;
        use warnings;

        use constant DEBUG_VERBOSE              => "v";
        use constant DEBUG_SUPPRESS_ERROR_MSGS  => "s";
        use constant DEBUG_IGNORE_VALIDATION    => "i";
        use constant DEBUG_SETPPING_COMPUTATION => "c";

        our ($debug_mode);

        mainMethod();

        sub mainMethod # ()
        {
            if(!$debug_mode)
            {
                print "debug_mode is OFF\n";
            }
            elsif($debug_mode)
            {
                print "debug_mode is ON\n";
            }
            else
            {
                print "OMG!\n";
                exit -1;
            }
            checkArgv();
            printErrorMsg("Error_Code_123", "Parsing Error at...");
            verbose();
        }

        sub checkArgv #()
        {
            print ("Number of ARGV : ".(1 + $#ARGV)."\n");
        }

        sub printErrorMsg # ($error_code, $error_msg, ..)
        {
            if(defined($debug_mode) && !($debug_mode =~ DEBUG_SUPPRESS_ERROR_MSGS))
            {
                print "You can only see me if -debug_mode is NOT set".
                      " to DEBUG_SUPPRESS_ERROR_MSGS\n";
                die("terminated prematurely...\n") and exit -1;
            }
        }

        sub verbose # ()
        {
            if(defined($debug_mode) && ($debug_mode =~ DEBUG_VERBOSE))
            {
                print "Blah blah blah...\n";
            }
        }

    So far as I can tell, at least it works: the -debug_mode switch doesn't interfere with normal ARGV, and the following command lines work:

        ./optional.pl
        ./optional.pl -debug_mode
        ./optional.pl -debug_mode=v
        ./optional.pl -debug_mode=s

    However, I am puzzled when multiple debug modes are "mixed", such as:

        ./optional.pl -debug_mode=sv
        ./optional.pl -debug_mode=vs

    I don't understand why the above lines "magically work". I see that both DEBUG_VERBOSE and DEBUG_SUPPRESS_ERROR_MSGS apply to the script, which is fine in this case. However, if there are some "conflicting" debug modes, I am not sure how to set the precedence of debug modes. Also, I am not certain whether my approach is good enough to Perlists, and I hope I am getting my feet moving in the right direction. One big problem is that I now put if statements inside most of my subroutines to control their behavior under different modes. Is this okay? Is there a more elegant way? I know there must be a debug module on CPAN or elsewhere, but I want a really minimal solution that doesn't depend on any module other than the defaults. And I cannot have any control over the environment where this script will be executed...
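
    One idea I'm weighing, sketched below: Getopt::Long ships with Perl itself, so it may satisfy my "defaults only" constraint; the flag-precedence rule here is just an illustration of one way to resolve mixed modes:

        use strict;
        use warnings;
        use Getopt::Long;

        my $debug_mode = '';
        GetOptions('debug_mode:s' => \$debug_mode) or die "bad options\n";

        # Check individual flags by membership, so "-debug_mode=sv" and
        # "-debug_mode=vs" behave identically; precedence then follows the
        # order of the checks, not the order the letters were typed.
        sub debug_has { index($debug_mode, $_[0]) >= 0 }

        print "verbose on\n"         if debug_has('v');
        print "suppressing errors\n" if debug_has('s');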


  • COM port read - Thread remains alive after timeout occurs

    - by Sna
    Hello to all. I have a DLL which includes a function called ReadPort that reads data from a serial COM port, written in C/C++. This function is called within an extra thread from another WINAPI function, using _beginthreadex. When the COM port has data to be read, the worker thread returns the data and ends normally, the calling thread closes the worker's thread handle, and the DLL works fine.

    However, if ReadPort is called without data pending on the COM port, then when the timeout occurs, WaitForSingleObject returns WAIT_TIMEOUT but the worker thread never ends. As a result, virtual memory grows by about 1 MB every time, physical memory grows by some KB, and the application that calls the DLL becomes unstable. I also tried to use TerminateThread(), but I got the same results.

    I have to admit that although I have enough development experience, I am not familiar with C/C++. I did a lot of research before posting, but unfortunately I didn't manage to solve my problem. Does anyone have a clue how I could solve this? I really want to stick to this kind of solution. Also, I want to mention that I think I can't use any global variables to implement some kind of extra events, because each of the DLL's functions may be called many times, for every COM port.

    I post some parts of my code below. The worker thread:

        unsigned int __stdcall ReadPort(void* readstr)
        {
            DWORD dwError;
            int rres;
            DWORD dwCommModemStatus, dwBytesTransferred;
            int ret;
            char szBuff[64] = "";

            ReadParams* params = (ReadParams*)readstr;

            ret = SetCommMask(params->param2, EV_RXCHAR | EV_CTS | EV_DSR | EV_RLSD | EV_RING);
            if (ret == 0)
            {
                _endthreadex(0);
                return -1;
            }

            ret = WaitCommEvent(params->param2, &dwCommModemStatus, 0);
            if (ret == 0)
            {
                _endthreadex(0);
                return -2;
            }

            ret = SetCommMask(params->param2, EV_RXCHAR | EV_CTS | EV_DSR | EV_RLSD | EV_RING);
            if (ret == 0)
            {
                _endthreadex(0);
                return -3;
            }

            if (dwCommModemStatus & EV_RXCHAR || dwCommModemStatus & EV_RLSD)
            {
                rres = ReadFile(params->param2, szBuff, 64, &dwBytesTransferred, NULL);
                if (rres == 0)
                {
                    switch (dwError = GetLastError())
                    {
                        case ERROR_HANDLE_EOF:
                            _endthreadex(0);
                            return -4;
                    }
                    _endthreadex(0);
                    return -5;
                }
                else
                {
                    strcpy(params->param1, szBuff);
                    _endthreadex(0);
                    return 0;
                }
            }
            else
            {
                _endthreadex(0);
                return 0;
            }

            _endthreadex(0);
            return 0;
        }

    The calling thread:

        int WINAPI StartReadThread(HANDLE porthandle, HWND windowhandle)
        {
            HANDLE hThread;
            unsigned threadID;
            ReadParams readstr;
            DWORD ret, ret2;

            readstr.param2 = porthandle;

            hThread = (HANDLE)_beginthreadex(NULL, 0, ReadPort, &readstr, 0, &threadID);

            ret = WaitForSingleObject(hThread, 500);

            if (ret == WAIT_OBJECT_0)
            {
                CloseHandle(hThread);
                if (readstr.param1 != NULL)
                    // Send message to GUI
                    return 0;
            }
            else if (ret == WAIT_TIMEOUT)
            {
                ret2 = CloseHandle(hThread);
                return -1;
            }
            else
            {
                ret2 = CloseHandle(hThread);
                if (ret2 == 0)
                    return -2;
            }
        }

    Thank you in advance, Sna.

    Read the article

  • Repeater not repeating :0) (asp.net)(vb)

    - by Phil
Morning Stack Overflow, I have a repeater with the following code in my aspx page:

<asp:Repeater ID="Contactinforepeater" runat="server">
    <HeaderTemplate>
        <h1>Contact Information</h1>
    </HeaderTemplate>
    <ItemTemplate>
        <table width="50%">
            <tr>
                <td colspan="2"><%#Container.DataItem("position")%></td>
            </tr>
            <tr>
                <td>Name:</td>
                <td><%#Container.DataItem("surname")%></td>
            </tr>
            <tr>
                <td>Telephone:</td>
                <td><%#Container.DataItem("telephone")%></td>
            </tr>
            <tr>
                <td>Fax:</td>
                <td><%#Container.DataItem("fax")%></td>
            </tr>
            <tr>
                <td>Email:</td>
                <td><%#Container.DataItem("email")%></td>
            </tr>
        </table>
    </ItemTemplate>
    <SeparatorTemplate>
        <br /><hr /><br />
    </SeparatorTemplate>
</asp:Repeater>

Then I have this code in my aspx.vb to get the data:

If did = 0 Then
    s = "sql works on db server"
    x = New SqlCommand(s, c)
    x.Parameters.Add("@contentid", Data.SqlDbType.Int)
    x.Parameters("@contentid").Value = contentid
    c.Open()
    r = x.ExecuteReader
    r.Read()
    Contactinforepeater.DataSource = r
    Contactinforepeater.DataBind()
End If
c.Close()
r.Close()

If Not did = 0 Then
    s = "sql works on db server"
    x = New SqlCommand(s, c)
    x.Parameters.Add("@contentid", SqlDbType.Int)
    x.Parameters("@contentid").Value = contentid
    x.Parameters.Add("@did", SqlDbType.Int)
    x.Parameters("@did").Value = did
    c.Open()
    r = x.ExecuteReader
    r.Read()
    Contactinforepeater.DataSource = r
    Contactinforepeater.DataBind()
End If
r.Close()
c.Close()

Whether 'did' is 0 or not, I get no data output to the page — just the 'Contact Information' h1 heading from the header template. I've tested the value of s in SQL Server Management Studio and it works fine. Position, surname, telephone, fax, and email all exist in the db. The particular page I am checking exists and has one set of contact information attached. Where am I going wrong? Thanks!

ps. Does my syntax appear correct?
pps. I am also open to different ways of achieving the same result. I tried via an SqlDataSource but ran into problems when using variables as params (there is no option to select them, only controls, querystring, etc.).
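One likely culprit: the r.Read() before DataBind(). DataBind() enumerates the reader itself, and a manual Read() first consumes a row — with exactly one contact row, the Repeater is handed an already-exhausted reader, which matches the "header only" symptom. A minimal sketch of the binding without the premature Read (the SELECT text is a placeholder for the real query):

s = "SELECT position, surname, telephone, fax, email FROM contacts WHERE contentid = @contentid"
x = New SqlCommand(s, c)
x.Parameters.Add("@contentid", SqlDbType.Int)
x.Parameters("@contentid").Value = contentid

c.Open()
r = x.ExecuteReader()

' No r.Read() here - DataBind() walks the reader on its own,
' and a Read() beforehand would skip the first (only) row.
Contactinforepeater.DataSource = r
Contactinforepeater.DataBind()

r.Close()
c.Close()

If the rows still come out empty after that, <%# DataBinder.Eval(Container.DataItem, "position") %> is the more conventional form in the templates than <%# Container.DataItem("position") %>.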

    Read the article

  • Problems with starting windows service on windows xp SP3

    - by Michiel Peeters
I'm currently facing a problem which I cannot resolve, and I really don't know what to do anymore. When I try to start the service I receive the message that the service started and then stopped again, because some services stop automatically if they have no work to do — for example, the Performance Logs and Alerts service. I've looked into the Windows logs, but nothing is written there that could explain why my service keeps stopping. I've also tried to start the Windows service via the command prompt, which gives me the message: "The service is not started, but the service didn't return any faults." I've tried to remove all registry keys which reference my service, which didn't resolve the issue. I've searched on Google (maybe not well enough) but didn't find an answer. I did find some websites describing what I could try, but none of those suggestions worked. This is kinda ** because I do not know where to look: I have no error message and no ID I can use to search on. I really don't know where to start, and I hope you guys can help me on this one.

Detailed explanation of the Windows service:

OS: Windows XP SP3
.NET Framework: .NET 4.0 Client Profile
Language: C#
Development environment: Visual Studio 2010 Professional (but Visual Studio 2012 RC is installed)
Communications: WCF (named pipes), WCF (BasicHttpBinding)

Named pipes: I chose this solution because I wanted to communicate from a Windows service to a Windows Forms application. It worked for quite some time, but suddenly my Windows service shut itself down and I couldn't restart it anymore. There are two named-pipe services implemented: an event service which sends notifications to the Windows Forms application, and a management service which gives the Windows Forms application the ability to maintain my Windows service.

BasicHttpBinding: the basic HTTP binding makes the connection to a central server. This connection is then used for streaming information from the client to the server.

I do not know which additional information you will need, but if you need something I'll try to give it as detailed as possible. Thank you in advance.
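A service that "starts and then stops" with nothing in the event log is very often one whose OnStart throws before any worker gets going — for example, a WCF ServiceHost.Open() failing because an HTTP port is taken or a URL reservation is missing. One low-effort way to find out is to log the exception yourself before the SCM gives up. A minimal diagnostic sketch (the class name, event-log source, and StartServiceHosts are placeholders, not the original code):

using System;
using System.Diagnostics;
using System.ServiceProcess;

public class MyService : ServiceBase          // hypothetical service class
{
    protected override void OnStart(string[] args)
    {
        try
        {
            StartServiceHosts();              // whatever normally runs at startup
        }
        catch (Exception ex)
        {
            // Surface the real failure instead of the generic
            // "service started and then stopped" message.
            EventLog.WriteEntry("MyService", ex.ToString(),
                                EventLogEntryType.Error);
            ExitCode = 1064;                  // ERROR_EXCEPTION_IN_SERVICE
            throw;
        }
    }

    private void StartServiceHosts()
    {
        // Open the WCF named-pipe and BasicHttpBinding hosts here.
        // If the HTTP endpoint lacks a URL reservation or its port is
        // already in use, ServiceHost.Open() throws and lands above.
    }

    protected override void OnStop() { }
}

If the log stays empty even with this in place, running the same startup code from a console application under the service's account usually reproduces the exception interactively.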

    Read the article

  • Alphabetically Ordering an array of words

    - by Genesis
I'm studying C on my own in preparation for my upcoming semester of school, and I was wondering what I was doing wrong with my code so far. If things look weird, it is because this is part of a much bigger grab bag of sorting functions I'm creating to get a sense of how to sort numbers, letters, arrays, and the like! I'm basically having some trouble with the manipulation of strings in C at the moment. Also, I'm quite limited in my knowledge of C right now! My main consists of this:

#include <stdio.h>
#include <stdlib.h>

int numbers[10];
int size;

int main(void)
{
    setvbuf(stdout, NULL, _IONBF, 0); // This is magical code that allows me to input.

    int wordNumber;
    int lengthOfWord = 50;

    printf("How many words do you want to enter: ");
    scanf("%i", &wordNumber);
    printf("%i\n", wordNumber);

    char words[wordNumber][lengthOfWord];

    printf("Enter %i words:", wordNumber);
    int i;
    for (i = 0; i < wordNumber + 1; i++) { // +1 is because my words[0] is blank.
        fgets(&words[i], 50, stdin);
    }
    for (i = 1; i < wordNumber + 1; i++) { // Same as the above comment!
        printf("%s", words[i]); // prints my words out!
    }

    alphabetize(words, wordNumber); // I want to sort these arrays with this function.
}

My sorting "method" I am trying to construct is below! This function is seriously flawed, but I thought I'd keep it all to show you where my mind was headed when writing this.

void alphabetize(char a[][], int size) { // This won't fly.
    size = size + 1;
    int wordNumber;
    int lengthOfWord;
    char sortedWords[wordNumber][lengthOfWord]; // In effort for the for loop

    int i;
    int j;
    for (i = 1; i < size; i++) { // My effort to copy over this array for manipulation
        for (j = 1; j < size; j++) {
            sortedWords[i][j] = a[i][j];
        }
    }

    // This should be kinda what I want when ordering words alphabetically, right?
    for (i = 1; i < size; i++) {
        for (j = 2; j < size; j++) {
            if (strcmp(sortedWords[i], sortedWords[j]) > 0) {
                char* temp = sortedWords[i];
                sortedWords[i] = sortedWords[j];
                sortedWords[j] = temp;
            }
        }
    }

    for (i = 1; i < size; i++) {
        printf("%s, ", sortedWords[i]);
    }
}

I guess I also have another question... When I use fgets() it's doing this thing where I get a null word in the first spot of the array. I have had other issues recently trying to scanf() char arrays in certain ways, specifically spacing my input word variables, which "magically" gets rid of the first null space before the character. An example of this is using scanf() to write "Hello" and getting " Hello" or " ""Hello"... I'd appreciate any thoughts on this; I've got all summer to study, so this doesn't need to be answered with haste! Also, thank you Stack Overflow as a whole for being so helpful in the past. This may be my first post, but I have been a frequent visitor for the past couple of years, and it's been one of the best places for helpful advice and tips.
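A guess at the "null word": scanf("%i", ...) stops at the number and leaves the newline in stdin, so the very first fgets() reads just that leftover "\n" — that is the blank words[0]. A rough corrected sketch (assuming a fixed 50-character word length): consume the stray newline, give the parameter a complete array type (char a[][] is not legal C), and swap rows with strcpy(), since array rows can't be exchanged by reassigning pointers:

#include <stdio.h>
#include <string.h>

#define WORD_LEN 50

// The inner dimension must be spelled out; char a[][] won't compile.
void alphabetize(char words[][WORD_LEN], int count)
{
    char temp[WORD_LEN];
    int i, j;

    // Selection-style sort: compare rows with strcmp, swap with strcpy.
    for (i = 0; i < count - 1; i++) {
        for (j = i + 1; j < count; j++) {
            if (strcmp(words[i], words[j]) > 0) {
                strcpy(temp, words[i]);
                strcpy(words[i], words[j]);
                strcpy(words[j], temp);
            }
        }
    }

    for (i = 0; i < count; i++)
        printf("%s", words[i]);   // fgets kept each trailing newline
}

int main(void)
{
    int wordNumber, i;

    printf("How many words do you want to enter: ");
    scanf("%i", &wordNumber);
    getchar();                    // eat the newline scanf left behind -
                                  // this is what showed up as the "null" word

    char words[wordNumber][WORD_LEN];
    printf("Enter %i words:\n", wordNumber);
    for (i = 0; i < wordNumber; i++)
        fgets(words[i], WORD_LEN, stdin);

    alphabetize(words, wordNumber);
    return 0;
}

Sorting strings that still carry the trailing '\n' is harmless here because every entry has one; stripping it with strcspn() would be the next refinement.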

    Read the article

  • How can I obtain the IP address of my server program?

    - by Dr Dork
Hello! This question is related to another question I just posted. I'm prepping for a simple work project and am trying to familiarize myself with the basics of socket programming in a Unix dev environment. At this point, I have some basic server-side and client-side code set up to communicate. Currently, my client code successfully connects to the server code, the server code sends it a test message, and then both quit out. Perfect! That's exactly what I wanted to accomplish. Now I'm playing around with the functions used to obtain info about the two environments (server and client). I'd like to obtain my server program's IP address. Here's the code I currently have to do this, but it's not working...

int sockfd;
unsigned int len;
socklen_t sin_size;
char msg[] = "test message";
char buf[MAXLEN];
int st, rv;
struct addrinfo hints, *serverinfo, *p;
struct sockaddr_storage client;
char s[INET6_ADDRSTRLEN];
char ip[INET6_ADDRSTRLEN];

// zero struct
memset(&hints, 0, sizeof(hints));
hints.ai_family   = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags    = AI_PASSIVE;

// get the server info
if ((rv = getaddrinfo(NULL, SERVERPORT, &hints, &serverinfo) != 0)) {
    perror("getaddrinfo");
    exit(-1);
}

// loop through all the results and bind to the first we can
for (p = serverinfo; p != NULL; p = p->ai_next) {
    // Setup the socket
    if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
        perror("socket");
        continue;
    }

    // Associate a socket id with an address to which other processes can connect
    if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) {
        close(sockfd);
        perror("bind");
        continue;
    }

    break;
}

if (p == NULL) {
    perror("Fail to bind");
}

inet_ntop(p->ai_family, get_in_addr((struct sockaddr *)p->ai_addr), s, sizeof(s));
printf("Server has TCP Port %s and IP Address %s\n", SERVERPORT, s);

and the output for the IP is always empty...

Server has TCP Port 21412 and IP Address ::

Any ideas what I'm missing? Thanks in advance for your help! This stuff is really complicated at first.
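The :: is not an error: with AI_PASSIVE and a NULL node name, getaddrinfo() returns the wildcard address (:: for IPv6, 0.0.0.0 for IPv4), which means "bind to every local interface" — so there is no single server IP in that addrinfo to print. (Incidentally, the != 0 in the getaddrinfo check binds tighter than the assignment, so rv stores the comparison result rather than the error code, though that is unrelated to the :: output.) One way to list the machine's actual addresses is to resolve its own hostname — a sketch, with the caveat that getifaddrs() is the more thorough tool, since the hostname may not cover every interface:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

// Print every address the local hostname resolves to (sketch only).
int print_local_addresses(void)
{
    char hostname[256], addrstr[INET6_ADDRSTRLEN];
    struct addrinfo hints, *res, *p;
    int rc;

    if (gethostname(hostname, sizeof(hostname)) != 0) {
        perror("gethostname");
        return -1;
    }

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;     // both IPv4 and IPv6
    hints.ai_socktype = SOCK_STREAM;

    if ((rc = getaddrinfo(hostname, NULL, &hints, &res)) != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return -1;
    }

    for (p = res; p != NULL; p = p->ai_next) {
        void *addr = (p->ai_family == AF_INET)
            ? (void *)&((struct sockaddr_in  *)p->ai_addr)->sin_addr
            : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;

        inet_ntop(p->ai_family, addr, addrstr, sizeof(addrstr));
        printf("local address: %s\n", addrstr);
    }

    freeaddrinfo(res);
    return 0;
}

For the address a connected peer actually sees, getsockname() on an established socket is the other standard option.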

    Read the article
