Search Results

Search found 94227 results on 3770 pages for 'common code'.


  • [C#][XNA 3.1] How can I host two different XNA windows inside one Windows Form?

    - by secutos
    I am making a Map Editor for a 2D tile-based game. I would like to host two XNA controls inside the Windows Form - the first to render the map; the second to render the tileset. I used the code here to make the XNA control host inside the Windows Form. This all works very well - as long as there is only one XNA control inside the Windows Form. But I need two - one for the map; the second for the tileset. How can I run two XNA controls inside the Windows Form? While googling, I came across the terms "swap chain" and "multiple viewports", but I can't understand them and would appreciate support. Just as a side note, I know the XNA control example was designed so that even if you ran 100 XNA controls, they would all share the same GraphicsDevice - essentially, all 100 XNA controls would share the same screen. I tried modifying the code to instantiate a new GraphicsDevice for each XNA control, but the rest of the code doesn't work. The code is a bit long to post, so I won't post it unless someone needs it to be able to help me. Thanks in advance.
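    A minimal sketch of one way this can look, assuming the GraphicsDeviceControl base class from the official XNA WinForms sample (the class with the abstract Initialize() and Draw() methods). That sample shares a single GraphicsDevice between all controls but re-targets it to whichever control is currently painting, so two controls can each draw their own content without a second device; the map and tileset class names below are hypothetical.

      // Hedged sketch, not a verified fix: assumes the GraphicsDeviceControl base class
      // from the XNA WinForms sample, which shares one GraphicsDevice across controls
      // and presents each control's drawing to that control's own window handle.
      class MapControl : GraphicsDeviceControl
      {
          SpriteBatch spriteBatch;

          protected override void Initialize()
          {
              spriteBatch = new SpriteBatch(GraphicsDevice);   // load map content here too
          }

          protected override void Draw()
          {
              GraphicsDevice.Clear(Color.CornflowerBlue);
              // spriteBatch.Begin(); ... draw the map tiles ... spriteBatch.End();
          }
      }

      class TilesetControl : GraphicsDeviceControl
      {
          protected override void Initialize() { /* load the tileset texture */ }

          protected override void Draw()
          {
              GraphicsDevice.Clear(Color.Black);
              // draw the tileset; no second GraphicsDevice is needed because the shared
              // device is pointed at this control while it is being painted
          }
      }

    Dropping both controls onto the same form would then give independent map and tileset views without instantiating a second GraphicsDevice.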

    Read the article

  • share the same cookie between two websites using the PHP cURL extension

    - by powerboy
    I want to get the contents of some emails in my Gmail account, and I would like to use the PHP cURL extension to do this. I followed these steps in my first try:

    1. In the PHP code, output the contents of https://www.google.com/accounts/ServiceLoginAuth.
    2. In the browser, the user inputs username and password to log in.
    3. In the PHP code, save cookies in a file named cookie.txt.
    4. In the PHP code, send a request to https://mail.google.com/ along with cookies retrieved from cookie.txt and output the contents.

    The following code does not work:

      $login_url   = 'https://www.google.com/accounts/ServiceLoginAuth';
      $gmail_url   = 'https://mail.google.com/';
      $cookie_file = dirname(__FILE__) . '/cookie.txt';

      $ch = curl_init();
      curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
      curl_setopt($ch, CURLOPT_HEADER, false);
      curl_setopt($ch, CURLOPT_POST, true);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
      curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
      curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file);

      curl_setopt($ch, CURLOPT_URL, $login_url);
      $output = curl_exec($ch);
      echo $output;

      curl_setopt($ch, CURLOPT_URL, $gmail_url);
      curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file);
      $output = curl_exec($ch);
      echo $output;

      curl_close($ch);

    Read the article

  • How do I create JavaScript escape sequences in PHP?

    - by ordinarytoucan
    I'm looking for a way to create valid UTF-16 JavaScript escape sequences (including surrogate pairs) from within PHP. I'm using the code below to get the UTF-32 code points (from a UTF-8 encoded character). This works for JavaScript escape characters (e.g. '\u00E1' for 'á') until you get into the upper ranges where surrogate pairs are needed (e.g. the character U+1D715 comes out as '\u1D715' but should be '\uD835\uDF15')...

      function toOrdinal($chr) {
          if (ord($chr{0}) >= 0 && ord($chr{0}) <= 127) {
              return ord($chr{0});
          } elseif (ord($chr{0}) >= 192 && ord($chr{0}) <= 223) {
              return (ord($chr{0}) - 192) * 64 + (ord($chr{1}) - 128);
          } elseif (ord($chr{0}) >= 224 && ord($chr{0}) <= 239) {
              return (ord($chr{0}) - 224) * 4096 + (ord($chr{1}) - 128) * 64 + (ord($chr{2}) - 128);
          } elseif (ord($chr{0}) >= 240 && ord($chr{0}) <= 247) {
              return (ord($chr{0}) - 240) * 262144 + (ord($chr{1}) - 128) * 4096 + (ord($chr{2}) - 128) * 64 + (ord($chr{3}) - 128);
          } elseif (ord($chr{0}) >= 248 && ord($chr{0}) <= 251) {
              return (ord($chr{0}) - 248) * 16777216 + (ord($chr{1}) - 128) * 262144 + (ord($chr{2}) - 128) * 4096 + (ord($chr{3}) - 128) * 64 + (ord($chr{4}) - 128);
          } elseif (ord($chr{0}) >= 252 && ord($chr{0}) <= 253) {
              return (ord($chr{0}) - 252) * 1073741824 + (ord($chr{1}) - 128) * 16777216 + (ord($chr{2}) - 128) * 262144 + (ord($chr{3}) - 128) * 4096 + (ord($chr{4}) - 128) * 64 + (ord($chr{5}) - 128);
          }
      }

    How do I adapt this code to give me proper UTF-16 code points? Thanks!

    Read the article

  • Simple Enterprise Library console application refuses to compile

    - by Vadim
    I just downloaded and installed Microsoft Enterprise Library 5.0. I fired up VS 2010 to play with EL 5 and created a very simple console application. However, it would not compile. I got the following error:

      The type or namespace name 'Data' does not exist in the namespace 'Microsoft.Practices.EnterpriseLibrary' (are you missing an assembly reference?)

    I added Microsoft.Practices.EnterpriseLibrary.Common, Microsoft.Practices.EnterpriseLibrary.Data, and Microsoft.Practices.Unity references to my project. Here's the simple code that refuses to compile:

      using Microsoft.Practices.EnterpriseLibrary.Common.Configuration.Unity;
      using Microsoft.Practices.EnterpriseLibrary.Data;
      using Microsoft.Practices.Unity;

      namespace EntLib
      {
          class Program
          {
              static void Main(string[] args)
              {
                  IUnityContainer container = new UnityContainer();
                  container.AddNewExtension<EnterpriseLibraryCoreExtension>();
                  var defaultDatabase = container.Resolve<Database>();
              }
          }
      }

    The error above complains about line #2: using Microsoft.Practices.EnterpriseLibrary.Data;. Someone will probably point out a stupid mistake of mine, but at the moment I fail to see it. I tried removing and re-adding Microsoft.Practices.EnterpriseLibrary.Data to the references, but it didn't help.
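    Two things worth checking, offered as guesses rather than a confirmed diagnosis: VS 2010 console projects default to the ".NET Framework 4 Client Profile" target, and a referenced assembly that cannot load under that profile can produce exactly this kind of "namespace does not exist" error even though the reference appears in the project, so switching to the full ".NET Framework 4" is a common first step. Once the reference resolves, EntLib 5 also offers a shorter bootstrap than wiring Unity by hand - a minimal sketch (the database name "MyDb" is hypothetical):

      using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
      using Microsoft.Practices.EnterpriseLibrary.Data;

      class Program
      {
          static void Main()
          {
              // Resolve the default database, or a named one, through the EntLib 5 container
              Database defaultDb = EnterpriseLibraryContainer.Current.GetInstance<Database>();
              Database namedDb = EnterpriseLibraryContainer.Current.GetInstance<Database>("MyDb");
          }
      }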

    Read the article

  • Properties vs. Fields: Need help grasping the uses of Properties over Fields.

    - by pghtech
    First off, I have read through a list of postings on this topic and I don't feel I have grasped properties, because of what I had come to understand about encapsulation and field modifiers (private, public, etc.). One of the main aspects of C# that I have come to learn is the importance of data protection within your code through the use of encapsulation. I 'thought' I understood that to be because of the ability to use the modifiers (private, public, internal, protected). However, after learning about properties I am somewhat torn in my understanding of not only the uses of properties, but the overall importance/ability of data protection (what I understood as encapsulation) within C#. To be more specific, everything I have read when I got to properties in C# is that you should try to use them in place of fields when you can, because:

    1) they allow you to change the data type, which you can't do when accessing the field directly;
    2) they add a level of protection to data access.

    However, from what I 'thought' I had come to know about field modifiers, they already did #2. So it seemed to me that properties just generate additional code unless you have some reason to change the type (#1) - because you are (more or less) creating hidden methods to access fields, as opposed to accessing them directly. Then there is the whole matter of modifiers also being applicable to properties, which further complicates my understanding of the need for properties to access data. I have read a number of chapters from different writers on properties and none have really given a good explanation of properties vs. fields vs. encapsulation (and good programming methods). Can someone explain:

    1) why I would want to use properties instead of fields (especially when it appears I am just adding additional code);
    2) any tips on recognizing the use of properties and not seeing them as simply methods (with the exception of the get/set being apparent) when tracing other people's code;
    3) any general rules of thumb when it comes to good programming methods in relation to when to use what?

    Thanks, and sorry for the long post - I didn't want to just ask a question that has been asked 100x without explaining why I am asking it again.
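    A small illustrative example (not from the original post) of the practical difference: the call site looks identical for a public field and a property, but only the property lets you add a rule later, or change how the value is stored, without touching or recompiling callers - which is the encapsulation argument in concrete form.

      public class AccountWithField
      {
          public decimal Balance;              // callers can write any value; no way to intercept it
      }

      public class AccountWithProperty
      {
          private decimal balance;             // the backing field stays private

          public decimal Balance
          {
              get { return balance; }
              set
              {
                  if (value < 0)               // a rule added later, invisible to existing callers
                      throw new ArgumentOutOfRangeException("value");
                  balance = value;
              }
          }
      }

    Properties (unlike fields) can also appear on interfaces, participate in data binding, and have asymmetric accessibility such as a public getter with a private setter, which is often where the "use properties for anything public" advice comes from.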

    Read the article

  • How to record different authentication types (username / password vs token based) in audit log

    - by RM
    I have two types of users for my system: normal human users with a username / password, and delegation-authorized accounts through OAuth (i.e. using a token identifier). The information that is stored for each is quite different, and they are managed by different subsystems. They do, however, interact with the same tables / data within the system, so I need to maintain the audit trail regardless of whether a human user or a token-based user modified the data. My solution at the moment is to have a table called something like AuditableIdentity, and then have the two types inheriting off that table (either in the single table, or as two separate tables with a 1-to-1 PK with AuditableIdentity). All operations would use the common AuditableIdentity PK for the CreatedBy, ModifiedBy, etc. columns. There isn't any FK constraint on the audit columns, so any text can go in there, but I want an easy way to determine whether it was a human or the system that made the change, and joining to the one AuditableIdentity table seems like a clean way to do that. Is there a best practice for this scenario? Is this an appropriate way of approaching the problem - or would you not bother with the common table and just rely on joins (to the two separate, unrelated user / token tables) later to determine which user type matches which audit records?
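    Purely as a hypothetical illustration of the shape being described (the post names no platform or ORM), the common-table approach could look like this in C#: every actor gets one AuditableIdentity row, the audit columns store that single key, and a discriminator makes "human vs system" trivial to report on.

      // Hypothetical sketch only - names and the discriminator are assumptions.
      public abstract class AuditableIdentity
      {
          public int Id { get; set; }              // the value written into CreatedBy / ModifiedBy
          public abstract string Kind { get; }     // "Human" or "System" for easy filtering
      }

      public class HumanUser : AuditableIdentity
      {
          public string Username { get; set; }
          public override string Kind { get { return "Human"; } }
      }

      public class OAuthAccount : AuditableIdentity
      {
          public string TokenIdentifier { get; set; }
          public override string Kind { get { return "System"; } }
      }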

    Read the article

  • restrict documents for mapreduce with mongoid

    - by theBernd
    I implemented the Pearson product correlation via map / reduce / finalize. The missing part is to restrict the documents (representing users) to be processed via a filter query. For a simple query like

      mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name => 'Bernd' })

    I get this to work. But my filter criteria is a little bit more complicated: I have one set of preferences which needs to have at least one common element and another set of preferences which may not have a common element. In a later step I also want to restrict this to documents (users) within a certain geographical distance. Currently I have this code working in my map function, but I would prefer to separate it into either query params as supported by mongoid or a javascript function. All my attempts to solve this have failed, since the code is either ignored or raises an error. I did a couple of tests. A regular find like

      User.where(:name.in => ['Arno', 'Bernd', 'Claudia'])

    works and returns

      #<Mongoid::Criteria:0x00000101f0ea40 @selector={:name=>{"$in"=>["Arno", "Bernd", "Claudia"]}}, @options={}, @klass=User, @documents=[]>

    Trying the same with mapreduce

      User.collection.mapreduce(mapper, reducer, :finalize => finalizer, :query => { :name.in => ['Arno', 'Bernd', 'Claudia'] })

    fails with

      `serialize': keys must be strings or symbols (TypeError)

    in bson-1.1.5. The intermediate query parameter looks like this

      :query=>{#<Mongoid::Criterion::Complex:0x00000101a209e8 @key=:name, @operator="in">=>["Arno", "Bernd", "Claudia"]}

    and at least @operator looks a bit weird to me. I'm also uncertain if the class name can be omitted. BTW - I'm using mongodb 1.6.5-x86_64, and the mongoid 2.0.0.beta.20, mongo 1.1.5 and bson 1.1.5 gems on MacOS. What am I doing wrong? Thanks in advance.

    Read the article

  • Cocoa NSOutlineView and Drag-and-Drop

    - by bobthecoder
    I recently started another thread without an account, so I'm reposting the question here with an account so I can edit current links to the program and other users can follow this. I have also updated the code below. Here is my original question: I read the other post here on outline views and DND, but I can't get my program to work. At the bottom of this post is a link to a zip of my project. It's very basic, with only an outline view and a button. I want it to receive text files being dropped on it, but something is wrong with my code or connections. I tried following Apple's example code for their NSOutlineView drag and drop, but I'm missing something. One difference is that my program is document-based and their example isn't. I set the File's Owner to receive delegate actions, since that's where my code to handle drag and drop is, as well as a button action. It's probably a simple mistake, so could someone please look at it and tell me what I'm doing wrong? Here is a link to the file: http://dl.dropbox.com/u/7195844/OutlineDragDrop1.zip

    Read the article

  • How do I setup NInject? (i.e.

    - by Greg
    Hi, I'm getting confused by the doco on how I should be setting up Ninject. I'm seeing different ways of doing it, some v2 versus v1 confusion probably included... Question - What is the best way in my WinForms application to set things up for Ninject (i.e. what are the few lines of code required)? I'm assuming this would go into the MainForm Load method. In other words, what code do I have to have prior to getting to:

      Bind<IWeapon>().To<Sword>();

    I have the following code, so effectively I just want to get clarification on the setup and bind code that would be required in my MainForm.Load() to end up with a concrete Samurai instance:

      internal interface IWeapon
      {
          void Hit(string target);
      }

      class Sword : IWeapon
      {
          public void Hit(string target)
          {
              Console.WriteLine("Chopped {0} clean in half", target);
          }
      }

      class Samurai
      {
          private IWeapon _weapon;

          [Inject]
          public Samurai(IWeapon weapon)
          {
              _weapon = weapon;
          }

          public void Attack(string target)
          {
              _weapon.Hit(target);
          }
      }

    thanks
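    A minimal Ninject 2-style sketch of the setup lines being asked about; in a WinForms app this would typically run once, for example in Program.Main or MainForm_Load, and the kernel would be kept around for the lifetime of the application:

      using Ninject;

      public static class CompositionRoot
      {
          public static Samurai CreateSamurai()
          {
              IKernel kernel = new StandardKernel();    // 1. create the kernel
              kernel.Bind<IWeapon>().To<Sword>();       // 2. register the bindings
              return kernel.Get<Samurai>();             // 3. resolve a concrete Samurai
          }
      }

    With Ninject 2 the [Inject] attribute on a single public constructor shouldn't even be required, since the kernel picks the constructor it can satisfy.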

    Read the article

  • Best way to store list of numbers and to retrieve them

    - by bingoNumbers
    Hi. What is the best way to store a list of random numbers (like lotto/bingo numbers) and retrieve them? I'd like to store in a database a number of rows, where each row contains 5-10 numbers ranging from 0 to 90. I will store a big number of those rows. What I'd like to be able to do is retrieve the rows that have at least X numbers in common with a newly generated row. Example - these are in the DB:

      [3,4,33,67,85,99]
      [55,56,77,89,98,99]
      [3,4,23,47,85,91]

    I will generate this:

      [1,2,11,45,47,88]

    and now I want to get the rows that have at least 1 number in common with this one. The easiest (and dumbest?) way is to make 6 selects and check for similar results. I thought of storing the numbers as a large binary string like

      000000000000000000000100000000010010110000000000000000000000000

    with 99 digits, where each position represents a number from 1 to 99, so if I have a 1 at the 44th position, it means that I have 44 in that row. This method probably just shifts the difficult task to the DB, and it's again not very smart. Any suggestions?

    Read the article

  • Copy vector of values to vector of pairs in one line

    - by Kirill V. Lyadvinsky
    I have the following types:

      struct X
      {
          int x;
          X( int val ) : x(val) {}
      };

      struct X2
      {
          int x2;
          X2() : x2() {}
      };

      typedef std::pair<X, X2> pair_t;
      typedef std::vector<pair_t> pairs_vec_t;
      typedef std::vector<X> X_vec_t;

    I need to initialize an instance of pairs_vec_t with values from X_vec_t. I use the following code and it works as expected:

      int main()
      {
          pairs_vec_t ps;
          X_vec_t xs; // this is not empty in the production code
          ps.reserve( xs.size() );
          { // I want to change this block to one line of code.
              struct get_pair {
                  pair_t operator()( const X& value ) {
                      return std::make_pair( value, X2() );
                  }
              };
              std::transform( xs.begin(), xs.end(), back_inserter(ps), get_pair() );
          }
          return 0;
      }

    What I'm trying to do is reduce my copying block to one line by using boost::bind. This code is not working:

      for_each( xs.begin(), xs.end(), boost::bind( &pairs_vec_t::push_back, ps, boost::bind( &std::make_pair, _1, X2() ) ) );

    I know why it is not working, but I want to know how to make it work without declaring extra functions and structs.

    Read the article

  • How to use an int2 database field as a boolean in Java using JPA/Hibernate

    - by mg
    Hello... I'm writing an application based on an already existing database (PostgreSQL) using JPA and Hibernate. There is an int2 column (activeYN) in a table which is used as a boolean (0 = false (inactive), not 0 = true (active)). In the Java application I want to use this attribute as a boolean, so I defined the attribute like this:

      @Entity
      public class ModelClass implements Serializable {
          /* ..... some code .... */

          private boolean active;

          @Column(name="activeYN")
          public boolean isActive() {
              return this.active;
          }

          /* .... some other code ... */
      }

    But there is an exception, because Hibernate expects a boolean database field and not an int2. Can I do this mapping in any way while still using a boolean in Java? I have a possible solution for this, but I don't really like it. My "hacky" solution is the following:

      @Entity
      public class ModelClass implements Serializable {
          /* ..... some code .... */

          private short active_USED_BY_JPA; // short because I need int2

          /**
           * @Deprecated this method is only used by JPA. Use the method isActive()
           */
          @Column(name="activeYN")
          public short getActive_USED_BY_JPA() {
              return this.active_USED_BY_JPA;
          }

          /**
           * @Deprecated this method is only used by JPA.
           * Use the method setActive(boolean active)
           */
          public void setActive_USED_BY_JPA(short active) {
              this.active_USED_BY_JPA = active;
          }

          @Transient // JPA will ignore transient-marked methods
          public boolean isActive() {
              return getActive_USED_BY_JPA() != 0;
          }

          @Transient
          public void setActive(boolean active) {
              this.active_USED_BY_JPA = (short) (active ? -1 : 0);
          }

          /* .... some other code ... */
      }

    Are there any other solutions for this problem? The "hibernate.hbm2ddl.auto" value in the Hibernate configuration is set to "validate". (Sorry, my English is not the best; I hope you understand it anyway.)

    Read the article

  • Event OnClick for a button in a custom notification

    - by Simone
    I have a custom notification with a button. To set up the notification and use the OnClick event on my button, I've used this code:

      // Notification and intent of the notification
      Notification notification = new Notification(R.drawable.stat_notify_missed_call,
              "Custom Notification", System.currentTimeMillis());
      Intent mainIntent = new Intent(getBaseContext(), NotificationActivity.class);
      PendingIntent pendingMainIntent = PendingIntent.getActivity(getBaseContext(), 0, mainIntent, 0);
      notification.contentIntent = pendingMainIntent;

      // RemoteViews and intent for my button
      RemoteViews notificationView = new RemoteViews(getBaseContext().getPackageName(),
              R.layout.remote_view_layout);
      Intent activityIntent = new Intent(Intent.ACTION_CALL, Uri.parse("tel:190"));
      PendingIntent pendingLaunchIntent = PendingIntent.getActivity(getBaseContext(), 0,
              activityIntent, PendingIntent.FLAG_UPDATE_CURRENT);
      notificationView.setOnClickPendingIntent(R.id.button1, pendingLaunchIntent);
      notification.contentView = notificationView;

      notificationManager.notify(CUSTOM_NOTIFICATION_ID, notification);

    With this code I have a custom notification with my custom layout... but I can't click the button! Every time I try to click the button, I click the entire notification, and so it launches the "mainIntent" instead of the "activityIntent". I have read on the internet that this code doesn't work on all devices. I have tried it on the emulator and on an HTC Magic, but I always have the same problem: I can't click the button! Is my code right? Can someone help me? Thanks, Simone

    Read the article

  • datepicker value is blank when disabled, jquery

    - by Mithil Deshmukh
    Hi. I'm fairly new to jQuery. I have a jQuery datepicker in a user control, and I have added a "Disable" property to the datepicker. Whenever I save the page (having this user control), the datepicker with Disable set to true is empty. All other datepickers save fine. Here is my code.

    ASPX:

      <USERCONTROL:DATEPICKER id="dpBirthDate" startyear="1980" runat="server" Disable="true" />

    ASCX:

      <input type="text" size="8" runat="server" id="txtDate" name="txtDate" onblur="ValidateForm(this.id);" />

    ASCX code-behind:

      Public Property Disable() As Boolean
          Get
              Return (txtDate.Disabled = True)
          End Get
          Set(ByVal bValue As Boolean)
              If (bValue = True) Then
                  txtDate.Attributes.Add("Disabled", "True")
              Else
                  txtDate.Attributes.Remove("Disabled")
              End If
          End Set
      End Property

    My jQuery:

      $(document).ready(function() {
          $("input[id$=txtDate]").datepicker({
              showOn: 'button',
              buttonImage: '<%=ConfigurationSettings.AppSettings("BASE_DIRECTORY")%>/Images/el-calendar.gif',
              buttonImageOnly: true
          });

          $("input[id$=txtDate]").mask("99/99/9999", { placeholder: " " });

          // Disable datepicker if "disable=true"
          $("input[id$=txtDate]").each(function() {
              if ($("input[id$=" + this.id + "]").attr("Disabled") == "True") {
                  $("input[id$=" + this.id + "]").datepicker("disable");
              } else if ($("input[id$=" + this.id + "]").attr("Disabled") == "False") {
                  $("input[id$=" + this.id + "]").datepicker("enable");
              }
          });
      });

    I am sorry, I am not sure how to format the code here; I apologize for the cluttered code. Can anybody tell me why the datepicker value is empty when it is disabled, but works fine otherwise? Thanks in advance.

    Read the article

  • Unable to access Java-created file -- sometimes

    - by BlairHippo
    In Java, I'm working with code running under WinXP that creates a file like this:

      public synchronized void store(Properties props, byte[] data) {
          try {
              File file = filenameBasedOnProperties(props);
              if ( file.exists() ) {
                  return;
              }

              File temp = File.createTempFile("tempfile", null);
              FileOutputStream out = new FileOutputStream(temp);
              out.write(data);
              out.flush();
              out.close();

              file.getParentFile().mkdirs();
              temp.renameTo(file);
          } catch (IOException ex) {
              // Complain and whine and stuff
          }
      }

    Sometimes, when a file is created this way, it's just about totally inaccessible from outside the code (though the code responsible for opening and reading the file has no problem), even when the application isn't running. When accessed via Windows Explorer, I can't move, rename, delete, or even open the file. Under Cygwin, I get the following when I ls -l the directory:

      ls: cannot access [big-honkin-filename]
      total 0
      ?????????? ? ? ? ? ? [big-honkin-filename]

    As implied, the filenames are big, but under the 260-character max for XP (though they are slightly over 200 characters). To further add to the sense that my computer just wants to make me feel stupid, sometimes the files created by this code are perfectly normal. The only pattern I've spotted is that once one file in the directory "locks", the rest are screwed. Has anybody ever run into something like this before, or have any insights into what's going on here?

    Read the article

  • How much of STL is too much?

    - by Darius Kucinskas
    I am using a lot of STL code with std::for_each, bind, and so on, but I've noticed that sometimes STL usage is not a good idea. For example, if you have a std::vector and want to perform one action on each item of the vector, your first idea is to use this:

      std::for_each(vec.begin(), vec.end(), Foo());

    and it is elegant and OK, for a while. But then comes the first set of bug reports and you have to modify the code. Now you need to add a parameter to the call to Foo(), so it becomes:

      std::for_each(vec.begin(), vec.end(), std::bind2nd(Foo(), X));

    but that is only a temporary solution. Now the project is maturing, you understand the business logic much better, and you want to add new modifications to the code. It is at this point that you realize that you should have used the good old:

      for(std::vector::iterator it = vec.begin(); it != vec.end(); ++it)

    Is this happening only to me? Do you recognise this kind of pattern in your code? Have you experienced similar anti-patterns using STL?

    Read the article

  • Reference 3.5 assembly from 4.0 winforms phail

    - by Dean Lunz
    So I have this utility library that is compiled as a DLL under .NET 3.5, and it is used by my ASP.NET 3.5 website. I created a .NET 4.0 WinForms app to push data onto the website, and I want to make use of the functionality in the utilities library from this WinForms app. The problem is that when I reference the utilities library and use the code in it, IntelliSense barks at me, saying that it can't find the objects in that library. The thing is, I would just switch the WinForms app to 3.5, which fixes the problem, but I am using Tasks, which require 4.0. My website and utilities library both run 3.5, and my website is hosted at GoDaddy, which currently only supports ASP.NET 3.5, so compiling my utilities library under 4.0 for my WinForms app is not going to work because it breaks my website. I have tried the app.config trick, à la useLegacyV2RuntimeActivationPolicy="true", but that did not help. Obviously I could start a new utilities project for 4.0, copy the code files from the existing utilities library, and then reference the new 4.0 utilities library in my WinForms app, but that strikes me as rather overkill when all I want to do is reference the library and use its functionality. Not to mention that I would have two utility libraries both containing the exact same code, and if I update the code in one I will need to make sure that the other is also updated. I could use "add file as link", but you get the idea. So is there anything else I could try, or any other way to solve or get around this? Or am I just going to have to break down and create an identical clone of the utilities library for 4.0?

    Read the article

  • TransactionScope Prematurely Completed

    - by Chris
    I have a block of code that runs within a TransactionScope, and within this block of code I make several calls to the DB: selects, updates, creates, and deletes - the whole gamut. When I execute my delete, I do it through an extension method on SqlCommand that automatically resubmits the query if it deadlocks, as this query could potentially hit a deadlock. I believe the problem occurs when a deadlock is hit and the function tries to resubmit the query. This is the error I receive:

      The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements.

    This is the simple code that executes the query (all of the code below executes within the using block of the TransactionScope):

      using (sqlCommand.Connection = new SqlConnection(ConnectionStrings.App))
      {
          sqlCommand.Connection.Open();
          sqlCommand.ExecuteNonQueryWithDeadlockHandling();
      }

    Here is the extension method that resubmits the deadlocked query:

      public static class SqlCommandExtender
      {
          private const int DEADLOCK_ERROR = 1205;
          private const int MAXIMUM_DEADLOCK_RETRIES = 5;
          private const int SLEEP_INCREMENT = 100;

          public static void ExecuteNonQueryWithDeadlockHandling(this SqlCommand sqlCommand)
          {
              int count = 0;
              SqlException deadlockException = null;

              do
              {
                  if (count > 0) Thread.Sleep(count * SLEEP_INCREMENT);
                  deadlockException = ExecuteNonQuery(sqlCommand);
                  count++;
              }
              while (deadlockException != null && count < MAXIMUM_DEADLOCK_RETRIES);

              if (deadlockException != null) throw deadlockException;
          }

          private static SqlException ExecuteNonQuery(SqlCommand sqlCommand)
          {
              try
              {
                  sqlCommand.ExecuteNonQuery();
              }
              catch (SqlException exception)
              {
                  if (exception.Number == DEADLOCK_ERROR) return exception;
                  throw;
              }

              return null;
          }
      }

    The error occurs on the line that executes the non-query:

      sqlCommand.ExecuteNonQuery();
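    A hedged sketch of an alternative shape, not the original author's code: when SQL Server picks a statement as a deadlock victim it rolls back the whole transaction, so once that happens inside a TransactionScope the ambient transaction is already doomed and re-running the single statement cannot succeed. Retrying therefore has to wrap the entire scope, with a fresh TransactionScope per attempt (DoAllTheWork / doAllTheWork below is a hypothetical stand-in for the selects, updates, creates, and deletes):

      using System;
      using System.Data.SqlClient;
      using System.Threading;
      using System.Transactions;

      static class DeadlockRetry
      {
          private const int DeadlockErrorNumber = 1205;
          private const int MaximumDeadlockRetries = 5;

          public static void ExecuteWithRetry(Action doAllTheWork)
          {
              for (int attempt = 0; ; attempt++)
              {
                  try
                  {
                      using (var scope = new TransactionScope())
                      {
                          doAllTheWork();      // the whole unit of work runs inside one scope
                          scope.Complete();
                      }
                      return;                  // success
                  }
                  catch (SqlException ex)
                  {
                      if (ex.Number != DeadlockErrorNumber || attempt >= MaximumDeadlockRetries - 1)
                          throw;
                      Thread.Sleep(100 * (attempt + 1));   // back off, then retry a brand new scope
                  }
              }
          }
      }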

    Read the article

  • Getting Started with Ruby & Ruby on Rails

    - by JakeTheSnake
    Some background: I'm a jack-of-all-trades, one of which is programming. I learned VB6 through Excel and PHP for creating websites, and so far it's worked out just fine for me. I'm not a CS major or even mathematically inclined - logic is what interests me.

    Current status: I'm willing to learn new and more powerful languages; my first foray into such a route is learning Ruby. I went to the main Ruby website and did the interactive intro (by the way, I'm currently getting redirected to google.com when I try the link... it's happening to other websites as well... is my computer infected?). I liked what I learned and wanted to get started using Ruby to create websites. I downloaded InstantRails and installed it; everything so far has been fine - the program starts up just fine, and I can test some Ruby code in the console. However, my troubles begin when I try to view a web page with Ruby code present.

    Lastly, my problem: As in PHP, I can browse to a .php file directly, and through using PHP tags and some simple 'echo' statements I can be on my way to making dynamic web pages. However, with the InstantRails app working, accessing a .rb or .rhtml page doesn't produce similar results. I made a simple text file named 'test.rb' and put basic HTML tags in there (html, head, body) and the Ruby tags <%= and %> with some Ruby code inside. The web page actually shows the tags and the code - as if it's all just plain HTML. I take it Ruby isn't parsing the page before it is displayed to the user, but this is where my lack of understanding of the Ruby environment stops me short. Where do I go from here?

    Read the article

  • jQuery error when aborting an ajax call only in Internet Explorer

    - by Rob Crowell
    When mousing over an image in a jCarousel, my site displays a popup whose contents are loaded via Ajax. I'm doing what I thought was fairly straightforward: keeping a handle to the xhrRequest object and aborting the existing one before making a new request. It works great in all browsers except IE, where I receive the error "Object doesn't support this property or method". Here's the code that is triggering it:

      function showPopup() {
          // ... code snipped ...

          // cancel the existing xhr request
          if (showPopup.xhrRequest != null) {
              showPopup.xhrRequest.abort();
              showPopup.xhrRequest = null;
          }

          showPopup.xhrRequest = $.ajax({
              url: url,
              type: "GET",
              success: function(data) {
                  $("#popup-content").html(data);
              }
          });

          // ... code snipped ...
      }

      showPopup.xhrRequest = null;

    It works great in Firefox and Chrome. I traced the error down to this bit of code in jquery.js, inside the ajax function (line 5233 in my copy of jQuery):

      // Override the abort handler, if we can (IE doesn't allow it, but that's OK)
      // Opera doesn't fire onreadystatechange at all on abort
      try {
          var oldAbort = xhr.abort;
          xhr.abort = function() {
              if ( xhr ) {
                  oldAbort.call( xhr );
              }
              onreadystatechange( "abort" );
          };
      } catch(e) { }

    The specific error occurs on the oldAbort.call( xhr ) line. Any ideas?

    Read the article

  • calling .ajax() from an eventHandler c# asp.net

    - by ibininja
    Good day! In the code-behind (upload.aspx) I have an event that returns the number of bytes being streamed; as I debug it, it works fine. I wanted to reflect the numbers returned from the event handler on a progress bar, and this is where I got lost. I tried using jQuery's .ajax() function, and this is how I implemented it. In the event handler in my code-behind I added this code to call the .ajax() function:

      Page.ClientScript.RegisterStartupScript(this.GetType(), "UpdateProgress",
          "<script type='text/javascript'>updateProgress();</script>");

    My plan is that whenever the event handler changes the value of bytes being streamed, it calls the JavaScript function updateProgress(). The .ajax() function updateProgress() is as follows:

      function updateProgress() {
          $.ajax({
              type: "POST",
              url: "upload.aspx/GetData",
              data: "{}",
              contentType: "application/json; charset=utf-8",
              dataType: "json",
              async: true,
              success: function (msg) {
                  $("#progressbar").progressbar("option", "value", msg.d);
              }
          });
      }

    I made sure that the GetData() function is marked [System.Web.Services.WebMethod] and that it is static as well. So the workflow of what I am trying to implement is:

    - Click on the Upload button
    - The code-behind starts executing and the event handler fires
    - The event handler calls the .ajax() function
    - The .ajax() function retrieves the bytes being streamed and updates the progress bar

    When I run the code, all runs well except that the .ajax() call is only executed when the upload is finished (and the progress bar also updates only when the upload is finished), even though I call the .ajax() function every time in the event handler, as shown above... What am I doing wrong? Am I thinking of this right? Is there anything else I should add, maybe an UpdatePanel or something? Thank you.
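    One hedged observation plus a sketch (the names below are hypothetical, not the original code): RegisterStartupScript only adds script to the page that is rendered after the request finishes, so nothing the server-side event handler does during the upload can make the browser call updateProgress() mid-request. The usual shape is the other way round: the event handler only records the latest byte count, and the browser polls GetData() on a timer (e.g. window.setInterval calling updateProgress()).

      public partial class Upload : System.Web.UI.Page
      {
          // Latest progress value; a real app would key this per user/session rather than
          // using one static field for everyone (demo-level simplification).
          private static int bytesStreamed;

          // Hypothetical event handler: just record the value, don't try to emit script.
          protected void OnBytesStreamed(int bytesSoFar)
          {
              bytesStreamed = bytesSoFar;
          }

          [System.Web.Services.WebMethod]
          public static int GetData()
          {
              return bytesStreamed;    // returned to the polling $.ajax() call as msg.d
          }
      }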

    Read the article

  • How to read/write high-resolution (24-bit, 8 channel) .wav files in Java?

    - by dB'
    I'm trying to write a Java application that manipulates high resolution .wav files. I'm having trouble importing the audio data, i.e. converting the .wav file into an array of doubles. When I use a standard approach an exception is thrown:

      AudioFileFormat as = AudioSystem.getAudioFileFormat(new File("orig.wav"));
      --> javax.sound.sampled.UnsupportedAudioFileException: file is not a supported file type

    Here's the file format info according to soxi:

      dB$ soxi orig.wav
      soxi WARN wav: wave header missing FmtExt chunk

      Input File     : 'orig.wav'
      Channels       : 8
      Sample Rate    : 96000
      Precision      : 24-bit
      Duration       : 00:00:03.16 = 303526 samples ~ 237.13 CDDA sectors
      File Size      : 9.71M
      Bit Rate       : 24.6M
      Sample Encoding: 32-bit Floating Point PCM

    Can anyone suggest the simplest method for getting this audio into Java? I've tried using a few techniques. As stated above, I've experimented with the Java AudioSystem (on both Mac and Windows). I've also tried using Andrew Greensted's WavFile class, but this also fails (WavFileException: Compression Code 3 not supported). One workaround is to convert the audio to 16 bits using sox (with the -b 16 flag), but this is suboptimal since it increases the noise floor. Incidentally, I've noticed that the file CAN be read by libsndfile. Is my best bet to write a JNI wrapper around libsndfile, or can you suggest something quicker? Note that I don't need to play the audio, I just need to analyze it, manipulate it, and then write it out to a new .wav file.

    * UPDATE * I solved this problem by modifying Andrew Greensted's WavFile class. His original version only read files encoded as integer values ("format code 1"); my files were encoded as floats ("format code 3"), and that's what was causing the problem. I'll post the modified version of Greensted's code when I get a chance. In the meantime, if anyone wants it, send me a message.

    Read the article

  • segfault during __cxa_allocate_exception in SWIG wrapped library

    - by lefticus
    While developing a SWIG-wrapped C++ library for Ruby, we came across an unexplained crash during exception handling inside the C++ code. I'm not sure of the specific circumstances needed to recreate the issue, but it happened first during a call to std::uncaught_exception, then, after some code changes, moved to __cxa_allocate_exception during exception construction. Neither GDB nor valgrind provided any insight into the cause of the crash. I've found several references to similar problems, including:

    - http://wiki.fifengine.de/Segfault_in_cxa_allocate_exception
    - http://forums.fifengine.de/index.php?topic=30.0
    - http://code.google.com/p/osgswig/issues/detail?id=17
    - https://bugs.launchpad.net/ubuntu/+source/libavg/+bug/241808

    The overriding theme seems to be a combination of circumstances:

    - A C application is linked to more than one C++ library
    - More than one version of libstdc++ was used during compilation
    - Generally the second version of C++ used comes from a binary-only implementation of libGL
    - The problem does not occur when linking your library with a C++ application, only with a C application

    The "solution" is to explicitly link your library with libstdc++ and possibly also with libGL, forcing the order of linking. After trying many combinations with my code, the only solution I found that works is the LD_PRELOAD="libGL.so libstdc++.so.6" ruby scriptname option. That is, none of the compile-time linking solutions made any difference. My understanding of the issue is that the C++ runtime is not being properly initialized. By forcing the order of linking you bootstrap the initialization process and it works. The problem occurs only with C applications calling C++ libraries, because the C application is not itself linked to libstdc++ and does not initialize the C++ runtime. Because using SWIG (or boost::python) is a common way of calling a C++ library from a C application, that is why SWIG often comes up when researching the problem. Is anyone out there able to give more insight into this problem? Is there an actual solution, or do only workarounds exist? Thanks.

    Read the article

  • Python - Open default mail client using mailto, with multiple recipients

    - by victorhooi
    Hi, I'm attempting to write a Python function to send an email to a list of users using the default installed mail client. I want to open the email client and give the user the opportunity to edit the list of users or the email body. I did some searching, and according to http://www.sightspecific.com/~mosh/WWW_FAQ/multrec.html it's apparently against the RFC spec to put multiple comma-delimited recipients in a mailto link. However, that's the way everybody else seems to be doing it. What exactly is the modern stance on this? Anyhow, I found the following two sites:

    - http://2ality.blogspot.com/2009/02/generate-emails-with-mailto-urls-and.html
    - http://www.megasolutions.net/python/invoke-users-standard-mail-client-64348.aspx

    which seem to suggest solutions using urllib.parse (urllib.parse.quote for me) and webbrowser.open. I tried the sample code from the first link (2ality.blogspot.com), and that worked fine and opened my default mail client. However, when I try to use the code in my own module, it seems to open up my default browser for some weird reason. No funny text in the address bar; it just opens the browser. The email_incorrect_phone_numbers() function is in the Employees class, which contains a dictionary (employee_dict) of Employee objects, which themselves have a number of employee attributes (sn, givenName, mail, etc.). Full code is actually here (http://stackoverflow.com/questions/2963975/python-converting-csv-to-objects-code-design):

      from urllib.parse import quote
      import webbrowser

      # ....

      def email_incorrect_phone_numbers(self):
          email_list = []
          for employee in self.employee_dict.values():
              if not PhoneNumberFormats.standard_format.search(employee.telephoneNumber):
                  print(employee.telephoneNumber, employee.sn, employee.givenName, employee.mail)
                  email_list.append(employee.mail)
          recipients = ', '.join(email_list)
          webbrowser.open("mailto:%s?subject=%s&body=%s" % (recipients, quote("testing"), quote('testing')))

    Any suggestions? Cheers, Victor

    Read the article

  • Python subprocess Popen.communicate() equivalent to Popen.stdout.read()?

    - by Christophe
    Very specific question (I hope): What are the differences between the following three snippets? (I expect the only difference to be that the first does not wait for the child process to finish, while the second and third do. But I need to be sure this is the only difference...) I also welcome other remarks/suggestions (though I'm already well aware of the shell=True dangers and cross-platform limitations). Note that I already read "Python subprocess interaction, why does my process work with Popen.communicate, but not Popen.stdout.read()?" and that I do not want/need to interact with the program afterwards. Also note that I already read "Alternatives to Python Popen.communicate() memory limitations?" but that I didn't really get it...

    First code:

      from subprocess import Popen, PIPE

      def exe_f(command='ls -l', shell=True):
          "Function to execute a command and return stuff"
          process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
          stdout = process.stdout.read()
          stderr = process.stderr.read()
          return process, stderr, stdout

    Second code:

      from subprocess import Popen, PIPE

      def exe_f(command='ls -l', shell=True):
          "Function to execute a command and return stuff"
          process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
          (stdout, stderr) = process.communicate()
          return process, stderr, stdout

    Third code:

      from subprocess import Popen, PIPE

      def exe_f(command='ls -l', shell=True):
          "Function to execute a command and return stuff"
          process = Popen(command, shell=shell, stdout=PIPE, stderr=PIPE)
          code = process.wait()
          stdout = process.stdout.read()
          stderr = process.stderr.read()
          return process, stderr, stdout

    Thanks.

    Read the article
