Search Results

Search found 9954 results on 399 pages for 'mean'.

Page 345/399

  • Where are the network boundaries in the Java Connector Architecture (JCA)?

    - by Laird Nelson
    I am writing a JCA resource adapter. I'm also, as I go, trying to fully understand the connection management portion of the JCA specification. As a thought experiment, pretend that the only client of this adapter will be a Swing Java Application Client located on a different machine. Also assume that the resource adapter will communicate with its "enterprise information system" (EIS) over the network as well.

    As I understand the JCA specification, the .rar file is deployed to the application server. The application server creates the .rar file's implementation of the ManagedConnectionFactory interface. It then asks it to produce a connection factory, which is the opaque object that is deployed to JNDI for the user to use to obtain a connection to the resource. (In the case of JDBC, the connection factory is a javax.sql.DataSource.) It is a requirement that the connection factory retain a reference to the application-server-supplied ConnectionManager, which, in turn, is required to be Serializable. This makes sense: in order for the connection factory to be stored in JNDI, it must be serializable, and in order for it to keep a reference to the ConnectionManager, the ConnectionManager must also be serializable. So fine, this little object graph gets installed in the application client's JNDI tree.

    This is where I start to get queasy. Is the ConnectionManager (the piece supplied by the application server that is supposed to handle connection management, sharing, pooling, etc.) wholly present on the client at this point? One of its jobs is to create ManagedConnection instances, and a ManagedConnection is not required to be Serializable, and the user connection handles it vends are also not required to be Serializable. That suggests to me that the whole connection pooling machinery is shipped wholesale to the application client and stuffed into its JNDI tree.

    Does this all mean that JCA interactions from the client side bypass the server-side componentry of the application server? Where are the network boundaries in the JCA API?
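
    (For reference, a minimal sketch of the client-side usage under discussion; the JNDI name is an assumption, not taken from the question, and exception handling is omitted:)

        // Hypothetical application-client code; getConnection() is where the
        // ConnectionManager (wherever it actually lives) gets involved.
        javax.naming.Context jndi = new javax.naming.InitialContext();
        javax.resource.cci.ConnectionFactory cf =
            (javax.resource.cci.ConnectionFactory) jndi.lookup("java:comp/env/eis/MyAdapter");
        javax.resource.cci.Connection conn = cf.getConnection();
        try {
            // ... interact with the EIS ...
        } finally {
            conn.close();
        }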

    Read the article

  • [Ruby] Modifying object inside a loop doesn't change object outside of the loop?

    - by Jergason
    I am having problems with modifying objects inside blocks and not getting the expected values outside the blocks. This chunk of code is supposed to transform a bunch of points in 3D space, calculate a score (the RMSD, or root mean squared deviation), and store both the score and the set of points that produced that score if it is lower than the current lowest score. At the end, I want to print out the best bunch of points.

        first = get_transformed_points(ARGV[0])
        second = get_transformed_points(ARGV[1])
        best_rmsd = first.rmsd(second)
        best_points = second

        # Transform the points around x, y, and z and get the rmsd. If the new points
        # have a smaller rmsd, store them.
        ROTATION = 30 # rotate by ROTATION degrees
        num_rotations = 360 / ROTATION
        radians = ROTATION * (Math::PI / 180)

        num_rotations.times do |i|
          second = second * x_rotate
          num_rotations.times do |j|
            second = second * y_rotate
            num_rotations.times do |k|
              second = second * z_rotate
              rmsd = first.rmsd(second)
              if rmsd < best_rmsd then
                best_points = second
                best_rmsd = rmsd
              end
            end
          end
        end

        File.open("#{ARGV[1]}.out", "w") { |f| f.write(best_points.to_s) }

    I can print out the points that are getting stored inside the block, and they are getting transformed and stored correctly. However, when I write out the points to a file at the end, they are the same as the initial set of points. Somehow the best_points = second chunk doesn't seem to be doing anything outside of the block. It seems like there are some scoping rules that I don't understand here. I had thought that since I declared and defined best_points above, outside of the blocks, it would be updated inside the blocks. However, it seems that when the blocks end, it somehow reverts back to the original value. Any ideas how to fix this? Is this a problem with blocks specifically?
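
    (Side note on block scoping, since the question raises it: a local variable that already exists outside a block is the same variable inside the block, as this toy example shows. It does not by itself explain the behavior described above.)

        best = 0
        3.times { |i| best = i if i > best }
        puts best  # => 2; the outer local was updated from inside the block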

    Read the article

  • Should we retire the term "Context"?

    - by MrGumbe
    I'm not sure if there is a more abused term in the world of programming than "Context." A word that has a very clear meaning in the English language has somehow morphed into a hot mess in software development, where the connotation can be completely different depending on which library you happen to be developing in.

    Tomcat uses the word context to mean the configuration of a web application. Java applets, on the other hand, use an AppletContext to describe attributes of the browser and the HTML tag that launched the applet, while BeanContext is defined as a container. ASP.NET uses the HttpContext object as a grab bag of state, containing information about the current request/response, session, user, server, and application objects. Context-Oriented Programming defines the term as "Any information which is computationally accessible may form part of the context upon which behavioral variations depend," which I translate as "anything in the world." The innards of the Windows OS use the CONTEXT structure to describe properties of the hardware environment. The .NET installation classes, however, use the InstallContext property to represent the command line arguments passed to the installation class.

    The above doesn't even touch on how all of us non-framework developers have used the term. I've seen plenty of developers fall into the subconscious trap of "I can't think of anything else to call this class, so I'll name it 'WidgetContext.'"

    Do you all agree that before naming a class a "Context," we may want to first consider some more descriptive terms? "Environment", "Configuration", and "ExecutionState" come readily to mind.

    Read the article

  • C# WPF Show Image from Mysql

    - by user3718026
    I'm a student and I am not very good at programming yet. I saved an image in my MySQL database for each player, and I created a program that lists some soccer players from the database. When I click on a listed player in the DataGrid, a new window appears with information about that player. Everything works, but now I want a picture of the selected player, read from the database, to be displayed on the information window. Can anybody help me? My English is not the best (I'm 17), so I hope you can understand what I mean. This is what I tried, but I don't know how to continue. PS: It's in WPF.

        MySqlCommand cmd = new MySqlCommand("SELECT Bilder FROM spieler WHERE Bilder='{8}'");
        MySqlDataReader rdr1 = cmd.ExecuteReader();
        try
        {
            conn.Open();
            while (rdr1.Read())
            {
                // image1... I don't know what to write here
            }
        }
        catch (Exception ex)
        {
            MessageBox.Show("Fehler: " + ex);
        }
        rdr1.Close()
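
    (A minimal sketch of one common way to turn a BLOB column into a WPF image, assuming Bilder holds raw image bytes and image1 is an Image control on the window; the query and its WHERE clause would also need a real parameter, this only covers the display part:)

        // Inside the while (rdr1.Read()) loop; names image1/Bilder come from the question, the rest is an assumption.
        byte[] bytes = (byte[])rdr1["Bilder"];
        var bmp = new System.Windows.Media.Imaging.BitmapImage();
        using (var ms = new System.IO.MemoryStream(bytes))
        {
            bmp.BeginInit();
            bmp.CacheOption = System.Windows.Media.Imaging.BitmapCacheOption.OnLoad; // load fully before the stream closes
            bmp.StreamSource = ms;
            bmp.EndInit();
        }
        image1.Source = bmp;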

    Read the article

  • Mixing menuItem.setIntent with onOptionsItemSelected doesn't work

    - by superjos
    While extending a sample Android activity that fires some other activities from its menu, I ended up with some menu items handled within onOptionsItemSelected, and some menu items (that just fire intents) handled by calling setIntent within onCreateOptionsMenu. Basically something like:

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            super.onCreateOptionsMenu(menu);
            menu.add(0, MENU_ID_1, Menu.NONE, R.string.menu_text_1);
            menu.add(0, MENU_ID_2, Menu.NONE, R.string.menu_text_2);
            menu.add(0, MENU_ID_3, Menu.NONE, R.string.menu_text_3).
                setIntent(new Intent(this, MyActivity_3.class));
            return true;
        }

        @Override
        public boolean onOptionsItemSelected(MenuItem item) {
            super.onOptionsItemSelected(item);
            switch (item.getItemId()) {
                case (MENU_ID_1):
                    // Process menu command 1 ...
                    return true;
                case (MENU_ID_2):
                    // Process menu command 2 ...
                    // E.g. also fire Intent for MyActivity_2
                    return true;
                default:
                    return false;
            }
        }

    Apparently, in this situation the Intent set on MENU_ID_3 is never fired, or at any rate the related activity is never started. The Android javadoc at some point says that [if you set an intent on a menu item] and nothing else handles the item, then the default behavior will be to [start the activity with the intent]. What does "and nothing else handles the item" actually mean? Is it enough to return false from onOptionsItemSelected? I also tried not calling super.onOptionsItemSelected(item) at the beginning and only invoking it in the default switch case, but I got the same results. Does anyone have any suggestions? Does Android allow mixing the two kinds of handling? Thanks for your time, everyone.

    Read the article

  • How do I make software that preserves database integrity and correctness?

    - by user287745
    I have made an application in VS 2008 (C#, with a SQL Server database created from VS 2008). The database has about 20 tables, each with many fields, and I have built an interface for adding, deleting, editing, and retrieving data according to the predefined needs of the users. Now I have to:

    1) Turn the project into a piece of software I can deliver to my professor, i.e. he can just double-click an icon and the software simply starts; no VS 2008 needed to start it in the debugger.

    2) Put the database on one powerful computer (dual core, Windows XP) and let users access it from other computers connected over the LAN. I am able to change the connection string to the shared database using VS 2008 and the debugger whenever the server changes, but how am I supposed to do that once it is packaged as software?

    3) Support many clients. Am I supposed to give the same software to everyone so they can all connect to the database? How will the integrity and correctness of the database be maintained? I mean, the db.mdf file will be in a folder shared with read and write access, so it is not guaranteed that only one user will write at a time. Is there any coding needed for this?

    Please help me out here; I am stuck and do not know what to do. I have no practical experience and would appreciate all the help. Thank you.
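
    (On point 2: a minimal sketch of the usual .NET way to keep the connection string outside the compiled program, so it can be edited on each machine without Visual Studio. The name "MyDb", the server name, and the catalog are made-up examples:)

        <!-- App config file (MyApp.exe.config) shipped next to the .exe -->
        <configuration>
          <connectionStrings>
            <add name="MyDb"
                 connectionString="Data Source=SERVERPC\SQLEXPRESS;Initial Catalog=MyDatabase;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
        </configuration>

        // Read it in code instead of hard-coding the string (needs a reference to System.Configuration):
        string cs = System.Configuration.ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;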

    Read the article

  • Weird behavior with windows startup C#

    - by FrieK
    Hi, I've created an application with an option to start on Windows startup. First I did this via the registry, like this:

        private void RunOnStartup(bool RunOnStartup)
        {
            Microsoft.Win32.RegistryKey key = Microsoft.Win32.Registry.CurrentUser.OpenSubKey("SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run", true);
            if (RunOnStartup)
            {
                key.SetValue(ProgramTitle, System.Windows.Forms.Application.ExecutablePath.ToString());
            }
            else
            {
                key.DeleteValue(ProgramTitle, false);
            }
        }

    And this worked, but not correctly. It started the .exe, but the program behaved as if it were a fresh copy running with the default 'config.xml' it needs, which is obviously wrong. I did not manage to find out what was wrong, so I tried it differently: creating a shortcut in the Startup folder. That couldn't go wrong, I figured; I mean, it's just a shortcut, right? I used this code:

        private void RunOnStartup(bool RunOnStartup)
        {
            string startup = Environment.GetFolderPath(Environment.SpecialFolder.Startup) + "\\" + ProgramTitle + ".url";
            if (RunOnStartup)
            {
                using (StreamWriter writer = new StreamWriter(startup))
                {
                    string app = System.Reflection.Assembly.GetExecutingAssembly().Location;
                    writer.WriteLine("[InternetShortcut]");
                    writer.WriteLine("URL=file:///" + app);
                    writer.WriteLine("IconIndex=0");
                    string icon = app.Replace('\\', '/');
                    writer.WriteLine("IconFile=" + icon);
                    writer.Flush();
                }
            }
            else
            {
                if (File.Exists(startup))
                {
                    File.Delete(startup);
                }
            }
        }

    And this started the application as well, but again with the same behavior. So my question is: does anyone have any idea how this happens? Any help is much appreciated! Thanks
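
    (One detail that often matters in this situation, offered as a guess since the question doesn't show how config.xml is opened: a program launched at startup may not start in its own folder, so a relative path like "config.xml" can resolve somewhere unexpected. A sketch of anchoring the path to the executable's directory instead; LoadConfig is a made-up placeholder:)

        // Assumes config.xml sits next to the .exe.
        string configPath = System.IO.Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "config.xml");
        LoadConfig(configPath);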

    Read the article

  • Why does this code sample produce a memory leak?

    - by citronas
    At university we were given the following code sample and were told that there is a memory leak when running this code. The sample is supposed to demonstrate a situation where the garbage collector can't work. As far as my object-oriented programming knowledge goes, the only code line able to create a memory leak would be

        items = Arrays.copyOf(items, 2 * size + 1);

    The documentation says that the elements are copied. Does that mean the references are copied (and therefore another entry on the heap is created), or that the objects themselves are copied? As far as I know, Object, and therefore Object[], is a reference type. So assigning a new value to 'items' would allow the garbage collector to see that the old 'items' array is no longer referenced and can therefore be collected. In my eyes, this code sample does not produce a memory leak. Could somebody prove me wrong? =)

        import java.util.Arrays;

        public class Foo {
            private Object[] items;
            private int size = 0;
            private static final int ISIZE = 10;

            public Foo() {
                items = new Object[ISIZE];
            }

            public void push(final Object o) {
                checkSize();
                items[size++] = o;
            }

            public Object pop() {
                if (size == 0)
                    throw new ///...
                return items[--size];
            }

            private void checkSize() {
                if (items.length == size) {
                    items = Arrays.copyOf(items, 2 * size + 1);
                }
            }
        }
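
    (On the side question of what "copied" means there: a tiny check one can run. It does not settle the leak question; it only shows that Arrays.copyOf copies references, not objects.)

        Object[] a = { new Object() };
        Object[] b = java.util.Arrays.copyOf(a, 2);
        System.out.println(a[0] == b[0]); // true: same object, only the reference was copied
        System.out.println(b[1]);         // null: padding for the larger size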

    Read the article

  • How expensive is a context switch? Is it better to implement a manual task switch than to rely on OS

    - by Vilx-
    The title says it all. Imagine I have two (three, four, whatever) tasks that have to run in parallel. Now, the easy way to do this would be to create separate threads and forget about it. But on a plain old single-core CPU that would mean a lot of context switching - and we all know that context switching is big, bad, slow, and generally simply Evil. It should be avoided, right? On that note, if I'm writing the software from ground up anyway, I could go the extra mile and implement my own task-switching. Split each task in parts, save the state inbetween, and then switch among them within a single thread. Or, if I detect that there are multiple CPU cores, I could just give each task to a separate thread and all would be well. The second solution does have the advantage of adapting to the number of available CPU cores, but will the manual task-switch really be faster than the one in the OS core? Especially if I'm trying to make the whole thing generic with a TaskManager and an ITask, etc?

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point...).

    Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in, and they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide an interface similar to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report, and close it, the file is at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables pointing to the database tables that store the report data, and that was many times slower (for some reason, SQL*Plus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

    Read the article

  • How do I distribute my slightly customized Linux in VirtualBox?

    - by goodrone
    Suppose I've got an Arch Linux installation which I'd like to distribute among students with (sometimes very) basic Linux knowledge, to make them able to compile C programs in an environment very similar to the one at the university. (Things like Cygwin or MinGW seem to be inappropriate here.) I have also chosen VirtualBox as the container for the virtual system. The question is: how do I distribute it? I mean:

    - installing VirtualBox on the target machine (if not already installed)
    - uncompressing and copying my image file (.VDI)
    - registering the image (so that VirtualBox can see it when launched)
    - configuring the guest system in VirtualBox (network, memory, etc.)
    - optionally installing PuTTY to simplify interfacing with the guest Linux

    Should I create an installer? Which one? Or just write some .BAT scripts? (The target host system is Windows, mostly XP and Vista.) I definitely don't want a webpage with screenshots explaining where to click and what to press, because that's boring. Additionally, what would be the best (most user-friendly) way to configure networking when the guest Linux system is run for the first time?
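
    (For what it's worth, a rough sketch of the kind of .BAT script this could become, using the VBoxManage command-line tool that ships with VirtualBox. The VM name, paths, and memory size are made-up examples, and the exact sub-commands and flags vary between VirtualBox versions, so treat this as an outline rather than a working installer:)

        rem Create and register a VM, then attach the copied .VDI (recent VBoxManage syntax assumed)
        VBoxManage createvm --name "ArchStudent" --register
        VBoxManage modifyvm "ArchStudent" --memory 512 --nic1 nat
        VBoxManage storagectl "ArchStudent" --name "SATA" --add sata
        VBoxManage storageattach "ArchStudent" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "%CD%\arch.vdi"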

    Read the article

  • Rails partial gets double escaped when using link_to_function

    - by dombesz
    Hi, I have the following code:

        def add_resume_link(name, form)
          link_to_function name do |page|
            html = form.fields_for :resumes, @general_resume.resumes.build, :child_index => 'NEW_RECORD' do |form_parent|
              render :partial => 'resume_form', :locals => {:form => form_parent}
            end
            page << "$('resumes').insert({ bottom: '#{escape_javascript(html)}'.replace(/NEW_RECORD/g, id) });"
          end
        end

    And somewhere in the resume_form partial I have:

        =add_skill_link("Add Skill", form, "resume_#{id}_skills")

    and that function looks like:

        def add_skill_link(name, form, id)
          link_to_function name do |page|
            html = form.fields_for :skill_items, @general_resume.skill_items.build, :child_index => 'NEW_RECORD' do |form_parent|
              render :partial => 'skill_form', :locals => {:form => form_parent, :parent => id}
            end
            page << "$('#{id}').insert({ bottom: '#{escape_javascript(html)}'.replace(/NEW_RECORD/g, new Date().getTime()) });"
          end
        end

    So basically I have JavaScript code which dynamically adds a piece of HTML (add_resume), and that HTML itself contains more JavaScript code which dynamically adds a select box to the page. My problem is that add_skill_link works fine if I use it from the server side, I mean when rendering from the server side, but it gets double-escaped when used in the way described above. I tried removing the escape_javascript from add_skill_link, but it's still not right. Any ideas?

    Read the article

  • PHP Multiple navigation highlights

    - by Blackbird
    I'm using the piece of code below to highlight the "active" menu item in my global navigation.

        <?php
        $path = $_SERVER['PHP_SELF'];
        $page = basename($path);
        $page = basename($path, '.php');
        ?>
        <ul id="nav">
            <li class="home"><a href="index.php">Home</a></li>
            <li <?php if ($page == 'search') { ?>class="active"<?php } ?>><a href="#">Search</a></li>
            <li <?php if ($page == 'help') { ?>class="active"<?php } ?>><a href="help.php">Help</a></li>
        </ul>

    All works great, but on a couple of pages I have a second sub-menu (a sidebar menu) within a global page. I basically need to add some kind of OR condition to the PHP, but I haven't a clue how. An example of what I mean:

        <li <?php if ($page == 'help') OR ($page == 'help-overview') OR ($page == 'help-cat-1') { ?>class="active"<?php } ?>><a href="#">Search</a></li>

    Thanks!
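
    (For illustration, two common ways to write that condition in valid PHP; the page names are the ones from the question:)

        <li <?php if ($page == 'help' || $page == 'help-overview' || $page == 'help-cat-1') { ?>class="active"<?php } ?>><a href="help.php">Help</a></li>

        <!-- or, keeping the list of pages in one place: -->
        <li <?php if (in_array($page, array('help', 'help-overview', 'help-cat-1'))) { ?>class="active"<?php } ?>><a href="help.php">Help</a></li>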

    Read the article

  • OpenGL Mipmapping: how does OpenGL decide on map level?

    - by Droozle
    Hi, I am having trouble implementing mipmapping in OpenGL. I am using OpenFrameworks and have modified the ofTexture class to support the creation and rendering of mipmaps. The following code is the original texture creation code from the class (slightly modified for clarity):

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        glTexSubImage2D(texData.textureTarget, 0, 0, 0, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glDisable(texData.textureTarget);

    This is my version with mipmap support:

        glEnable(texData.textureTarget);
        glBindTexture(texData.textureTarget, (GLuint)texData.textureID);
        gluBuild2DMipmaps(texData.textureTarget, texData.glTypeInternal, w, h, texData.glType, texData.pixelType, data);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MAG_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(texData.textureTarget, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glDisable(texData.textureTarget);

    The code does not generate errors (gluBuild2DMipmaps returns 0) and the textures are rendered without problems. However, I do not see any difference. The scene I render consists of "flat, square tiles" at z=0; it's basically a 2D scene. I zoom in and out by using glScale() before drawing the tiles. When I zoom out, the pixels of the tile textures start to "dance", indicating (as far as I can tell) unfiltered texture look-up. See: http://www.youtube.com/watch?v=b_As2Np3m8A at 25s.

    My question is: since I do not move the camera position, but only use scaling of the whole scene, does this mean OpenGL cannot decide on the appropriate mipmap level and uses the full texture size (level 0)?

    Paul

    Read the article

  • Questions on Juval Lowy's IDesign C# Coding Standard

    - by Jan
    We are trying to use the IDesign C# coding standard. Unfortunately, I found no comprehensive document explaining all the rules it gives, and his book does not always help either. Here are the open questions that remain for me (from chapter 2, Coding Practices):

    - No. 26: Avoid providing explicit values for enums unless they are integer powers of 2
    - No. 34: Always explicitly initialize an array of reference types using a for loop
    - No. 50: Avoid events as interface members
    - No. 52: Expose interfaces on class hierarchies
    - No. 73: Do not define method-specific constraints in interfaces
    - No. 74: Do not define constraints in delegates

    Here's what I think about those:

    - (26) I thought that providing explicit values would be especially useful when adding new enum members at a later point in time. If these members are added between other already existing members, I would provide explicit values to make sure the integer representation of existing members does not change.
    - (34) No idea why I would want to do this. I'd say this totally depends on the logic of my program.
    - (50) I see that there is the alternative option of providing "sink interfaces" (simply providing all the "OnXxxHappened" methods up front), but what is the reason to prefer one over the other?
    - (52) Unsure what he means here. Could this mean "When implementing an interface explicitly in a non-sealed class, consider providing the implementation in a protected virtual method that can be overridden"? (See Programming .NET Components, 2nd Edition, end of the chapter "Interfaces and Class Hierarchies".)
    - (73) I suppose this is about providing a "where" clause when using generics, but why is this bad on an interface?
    - (74) I suppose this is about providing a "where" clause when using generics, but why is this bad on a delegate?
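
    (On No. 34, a small illustration of what the rule is talking about; Customer is just a placeholder type:)

        // new Customer[3] only allocates the array; every element starts out as null.
        Customer[] customers = new Customer[3];
        for (int i = 0; i < customers.Length; i++)
        {
            customers[i] = new Customer(); // explicit initialization of each reference-type element
        }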

    Read the article

  • iPhone viewWillAppear not firing

    - by chzk
    I've read numerous posts about people having problems with viewWillAppear when they do not create their view hierarchy just right. My problem is I can't figure out what that means. If I create a RootViewController and call addSubview on that controller's view, I would expect the added view(s) to be wired up for viewWillAppear events.

    Does anyone have an example of a complex programmatic view hierarchy that successfully receives viewWillAppear events at every level?

    Apple's docs state: "Warning: If the view belonging to a view controller is added to a view hierarchy directly, the view controller will not receive this message. If you insert or add a view to the view hierarchy, and it has a view controller, you should send the associated view controller this message directly. Failing to send the view controller this message will prevent any associated animation from being displayed."

    The problem is that they don't describe how to do this. What does "directly" mean? How do you "indirectly" add a view? I am fairly new to Cocoa and the iPhone, so it would be nice if there were useful examples from Apple beyond the basic Hello World material. Any help is greatly appreciated...

    Read the article

  • What are the best software/website UI designs you have ever seen?

    - by Edwin
    What are the best UI designs, in terms of usability and aesthetics, you have ever seen? I mean both desktop software (on any OS) and websites. My list:

    - Picasa 3: the way it organizes photos.
    - Find-and-highlight-as-you-type in Google Chrome.
    - Dynamic search hints when entering something in the search box in Gmail.
    - I'm not a Mac OS X user, but I have seen that in most windows the top toolbar shows both icons and text for each function, as opposed to Windows, where many programs (MS Office included) have many small toolbar icons whose purpose you can hardly understand until you hover the mouse over them for a while to see the hints (if any).
    - The ability to search for a setting in the Eclipse IDE.
    - The way you make 3D models in Google SketchUp.
    - The way you label an email in Gmail.

    What's on your list? Well, I couldn't resist listing some annoying UI designs I have experienced and remember at this moment:

    - IE on Windows Server: when you visit a new website, you have to click many times to get it added to the white list before you can start browsing. IIRC, it's not fixed in IE 8, which I last used on Windows 2008.
    - The default search behavior in the file explorer on Windows XP, with that animated thing...
    - The dialog that shows up when you are trying to save a plain-text CSV file in Excel after applying some formatting options that are not compatible with CSV.

    Read the article

  • Source folders for a maven project in eclipse

    - by 4NDR01D3
    Hello all, I have a project that uses Maven, and I want to set it up in my working environment with Eclipse (Galileo). The project is on an SVN server, and I can check out the project and everything looks OK. I can even run the unit tests, and everything works there. However, now that everything is there I wanted to work on the code, and, oh surprise, there are no packages in my project. I mean, all the source code is in the src folder, and browsing through it I can see all my files, but if I open the files from there, they are opened as text files with no coloring and, worse, no help at all about compilation errors.

    I don't know what I am doing wrong, because I had the same project working well on another machine. So here is what I did; please let me know if you notice I did something wrong, missed any steps, or anything else that can help me:

    In the SVN Repository view (using Subclipse 1.6.10):
    1. Added my SVN repository
    2. Browsed to the folder where I have the pom file
    3. Right click, "Check out as a Maven project..." (using m2eclipse 0.10.020100209)
    4. Used the default options and Finish.

    The projects were created with no problem. I say projects, because this Maven project has modules, and each module became a project in Eclipse.

    Back in the Java perspective: right click on the project, "Run as Maven test" (using JWebUnitTest, because I am testing a servlet). BUILD SUCCESS!!

    But as I said, there are no packages, so I can't really develop in this environment. Any help?? Thanks!

    Read the article

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose--it stores my text as I write fiction. (I know, I know...geeky.) I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use straight word count as it ignores editing and compression, both vital parts of writing. I think I want to track: (words added)+(words removed) which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable). Is there a flag to git-diff to give diffs on a word-by-word basis? Alternately, is there a solution using standard command-line tools to compute the metric above?
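
    (One possibly relevant pointer, with the caveat that it needs a reasonably recent git: git diff has a word-level mode. A rough sketch of counting added and removed words between two revisions; the revision names, file name, and grep patterns are illustrative, and --word-diff=porcelain prefixes added runs with "+" and removed runs with "-":)

        # words added (lines starting with '+' but not the '+++' header)
        git diff --word-diff=porcelain HEAD~1 HEAD -- chapter01.txt | grep -E '^\+[^+]' | wc -w
        # words removed (lines starting with '-' but not the '---' header)
        git diff --word-diff=porcelain HEAD~1 HEAD -- chapter01.txt | grep -E '^-[^-]' | wc -w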

    Read the article

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

        log_table = schema.Table('log_table', metadata,
                                 schema.Column('id', types.Integer, primary_key=True),
                                 schema.Column('time', types.DateTime),
                                 schema.Column('ip', types.String(length=15))
        ....
        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields)  # Turn them into ints, datetimes, etc
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1

    My two questions are:

    1. Is there a way to speed up the inserts within this basic framework? Maybe a queue of inserts and some insertion threads, some sort of bulk insert, etc.?
    2. When I used MySQL, the insert time for about ~1.2 million records was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the DB engines seem about right, or does it mean I am doing something very wrong?
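
    (On question 1, a minimal sketch of one common SQLAlchemy approach: passing a list of row dicts to a single execute() so the driver can use executemany. The batch size of 1000 is an arbitrary choice, and the surrounding names are the ones from the question:)

        batch = []
        for line in file_to_parse:
            m = line_regex.match(line)
            if not m:
                continue
            batch.append(pythoninfy_log(m.groupdict()))
            if len(batch) >= 1000:                       # flush in chunks instead of row by row
                connection.execute(log_table.insert(), batch)
                batch = []
        if batch:                                        # flush whatever is left over
            connection.execute(log_table.insert(), batch)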

    Read the article

  • How Best to Replace Ugly Queries and Dynamic PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and sometimes requiring dynamic SQL. That is, sometimes SQL queries require set based processing along with significant procedural processing. This is what PL/SQL is custom made for, but as a language it does not begin to compare to C#. Now and then I convert a PL/SQL procedure to C#, and am always amazed at how much cleaner and easier to both read and write the C# version is. The C# program might for example construct a SQL query string piece by piece and/or run several queries and process them as needed. The C# version is usually much faster as well, which must mean that I'm not very good at PL/SQL either. I do not currently have access to LINQ. My question is, how best to package all these little C# programs, which are really just mini reports, that is, replacements for ugly SQL queries? Right now I'm actually using NUnit to hold them, and calling each report a [Test], even though they aren't really tests. NUnit just happens to provide a convenient packaging framework.

    Read the article

  • Spork doesn't reload code

    - by there-is-no-spoon
    I am using the following gems with ruby-1.9.3-p194:

    - rails 3.2.3
    - rspec-rails 2.9.0
    - spork 1.0.0rc2
    - guard-spork 0.6.1

    The full list of gems used is available in this Gemfile.lock or Gemfile. And I am using these configuration files: Guardfile, .rspec, spec_helper.rb, factories.rb.

    If I modify any model (or a custom validator in app/validators, etc.), code reloading doesn't work. I mean, when I run the specs (hit Enter on the Guard console), Spork still contains the "old code" and I get obsolete error messages. But when I manually restart Guard and Spork (Ctrl-C, Ctrl-D, guard), everything works fine. This gets tiresome after a few times.

    Questions: Can somebody look at my config files and find the error that blocks code reloading? Or maybe this is an issue with the newest Rails version?

    PS: This problem repeats over some projects (and on some it does NOT), but I haven't figured out yet why this happens.

    PS2: Perhaps this problem has something to do with ActiveAdmin? When I change a file in app/admin, the code is reloaded.

    Read the article

  • Efficient Multiplication of Varying-Length #s [Conceptual]

    - by Milan Patel
    Write the pseudocode of an algorithm that takes in two arbitrary-length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm.

    I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:

        a * b = a/2 * 2b             if a is even
        a * b = (a-1)/2 * 2b + b     if a is odd

    My pseudocode is:

        rpa(x, y){
            if x is 1
                return y
            if x is even
                return rpa(x/2, 2y)
            if x is odd
                return rpa((x-1)/2, 2y) + y
        }

    I have 3 questions:

    1. Is this efficient for arbitrary-length numbers? I implemented it in C and tried varying-length numbers. The run-time was near-instant in all cases, so it's hard to tell empirically...
    2. Can I apply the Master Theorem to understand the complexity?
       - a = # subproblems in recursion = 1 (max 1 recursive call across all states)
       - n/b = size of each subproblem = n/1, so b = 1 (problem doesn't change size...?)
       - f(n^d) = work done outside recursive calls = 1, so d = 0 (the addition when a is odd)
       - a = 1, b^d = 1, a = b^d, so the complexity is in n^d * log(n) = log(n)
       This makes sense logically, since we are halving the problem at each step, right?
    3. What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?

    Many thanks in advance.
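
    (A small worked trace of the recurrence as written, as a sanity check of the odd case; 13 * 12 = 156:)

        rpa(13, 12)  -> 13 odd  -> rpa(6, 24) + 12
        rpa(6, 24)   -> 6 even  -> rpa(3, 48)
        rpa(3, 48)   -> 3 odd   -> rpa(1, 96) + 48
        rpa(1, 96)   -> 96
        Total: 96 + 48 + 12 = 156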

    Read the article

  • Do vs. Run vs. Execute vs. Perform verbs

    - by coffeeaddict
    Before anyone starts to go nuts and red-flag this post as "subjective" (which drives me absolutely nuts, because everyone has their own reasons for posting something that others consider subjective; subjective is subjective to each person, how about that!), let me point out a couple of things so this post does not get flagged by flag-happy moderators:

    1) There are community guidelines on specific keywords recommended by certain organizations or people (e.g. Microsoft, Lance Hunt, etc.)
    2) I want to know what others are using the most and why, and why they feel one verb reads better than the others
    3) Books even talk about this verb issue (Uncle Bob, etc.), so it's not subjective

    Now to my actual question:

    a) What list of verbs are you using for method names? What's your personal or team standard?
    b) I debate whether to use Do vs. Run vs. Execute vs. Perform, and I am wondering if any of these are no longer recommended, or if some just aren't really used and I should scratch them.

    Basically, any one of those verbs means the same thing: to invoke some process (method call). This is outside of CRUD operations. For example:

        ExecutePayPalWorkflow();

    could also be any one of these names instead:

        DoPayPalWorkflow();
        RunPayPalWorkflow();
        PerformPayPalWorkflow();

    Or does it not really matter, because any of those verbs is understandable enough, and the words that follow ("PayPalWorkflow") show your intent anyway? This discussion can apply to any language; I just put the two main tags, C# and Java, here, which is good enough for me to get some solid answers or experiences.

    Read the article

  • Types in Haskell

    - by Linda Cohen
    I'm kind of new to Haskell and I have difficulty understanding how inferred types and the like work.

        map :: (a -> b) -> [a] -> [b]
        (.) :: (a -> b) -> (c -> a) -> c -> b

    What EXACTLY does that mean?

        foldr  :: (a -> b -> b) -> b -> [a] -> b
        foldl  :: (a -> b -> a) -> a -> [b] -> a
        foldl1 :: (a -> a -> a) -> [a] -> a

    What are the differences between these? And how would I derive the inferred type of something like

        foldr map

    THANKS!
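
    (Not an answer, but a practical pointer: GHCi prints inferred types with the :t command, so these can be checked interactively. The session below reflects what GHC reports, modulo the exact letters it picks for the type variables:)

        ghci> :t map
        map :: (a -> b) -> [a] -> [b]
        ghci> :t foldr map
        foldr map :: [b] -> [b -> b] -> [b]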

    Read the article
