Search Results

Search found 21331 results on 854 pages for 'require once'.


  • Remove duplicate records/objects uniquely identified by multiple attributes

    - by keruilin
    I have a model called HeroStatus with the following attributes: id, user_id, recordable_type, hero_type (can be NULL!), recordable_id, created_at. There are over 100 hero_statuses, and a user can have many hero_statuses, but can't have the same hero_status more than once. A user's hero_status is uniquely identified by the combination of recordable_type + hero_type + recordable_id. What I'm trying to say, essentially, is that there can't be a duplicate hero_status for a specific user. Unfortunately, I didn't have a validation in place to assure this, so I ended up with some duplicate hero_statuses for users after I made some code changes. For example:

        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2010-05-03 18:30:30'
        user_id = 18, recordable_type = 'Evil', hero_type = 'Halitosis', recordable_id = 1, created_at = '2009-03-03 15:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = 'Hugs',      recordable_id = 1, created_at = '2009-02-03 12:30:00'
        user_id = 18, recordable_type = 'Good', hero_type = NULL,        recordable_id = 2, created_at = '2009-12-03 08:30:00'

    (The last two are obviously not dups. The first two are.) So what I want to do is get rid of the duplicate hero_status. Which one? The one with the most recent date. I have three questions: How do I remove the duplicates using a SQL-only approach? How do I remove the duplicates using a pure Ruby solution? Something similar to this: http://stackoverflow.com/questions/2790004/removing-duplicate-objects. How do I put a validation in place to prevent duplicate entries in the future?
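
    A hedged sketch of the SQL-only cleanup, assuming MySQL and a hero_statuses table matching the attributes above: a self-join deletes every row that has an older sibling in its duplicate group, which keeps the oldest row and removes the most recent one:

        -- Assumes table hero_statuses; keeps the oldest row of each duplicate group.
        DELETE hs
        FROM hero_statuses hs
        JOIN hero_statuses keep
          ON  keep.user_id         = hs.user_id
          AND keep.recordable_type = hs.recordable_type
          AND keep.recordable_id   = hs.recordable_id
          AND (keep.hero_type = hs.hero_type
               OR (keep.hero_type IS NULL AND hs.hero_type IS NULL))
          AND keep.created_at < hs.created_at;

    For the validation, something along the lines of validates_uniqueness_of :recordable_id, :scope => [:user_id, :recordable_type, :hero_type] in the model guards against future duplicates (a nullable hero_type defeats a simple unique index in some databases, so the application-level check is worth keeping).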

    Read the article

  • Audio output from Silverlight

    - by leecarter
    I'm looking to develop a Silverlight application which will take a stream of data (not an audio stream as such) from a web server. The data stream would then be manipulated to give audio of a certain format (G.711 A-law, for example), which would then be converted into PCM so that additional effects can be applied (such as boosting the volume). I'm OK up to this point. I've got my data and converted the G.711 into PCM, but my problem is being able to output this PCM audio to the sound card. I'm basing a solution on some C# code intended for a .NET application, but in Silverlight there is a problem with trying to take a copy of a delegate (function pointer), which will be the topic of a separate question once I've produced a simple code sample. So, the question is... How can I output the PCM audio that I have held in a data structure (currently an array) in my Silverlight app to the user? (Please don't say write the byte values to a text box.) If it were an MP3 or WMA file I would play it using a MediaElement, but I don't want to have to make it into a file, as this would put a crimp on applying dynamic effects to the audio. I've seen a few posts from people saying low-level audio support is poor/non-existent in Silverlight, so I'm open to any suggestions/ideas people may have.
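
    Worth checking (hedged: this is the Silverlight 3 API, so it may not apply on 2): MediaStreamSource lets a MediaElement pull raw PCM from your own code without a file. A bare skeleton of the subclass, with the PCM plumbing left as comments since the WAVEFORMATEX details are fiddly:

        public class PcmStreamSource : System.Windows.Media.MediaStreamSource
        {
            private byte[] pcm;  // hypothetical: the PCM you decoded from G.711

            protected override void OpenMediaAsync()
            {
                // Describe a single audio stream: hex-encode a WAVEFORMATEX into
                // MediaStreamAttributeKeys.CodecPrivateData, then call
                // ReportOpenMediaCompleted(...) with the source/stream descriptions.
            }

            protected override void GetSampleAsync(System.Windows.Media.MediaStreamType mediaStreamType)
            {
                // Wrap the next chunk of this.pcm in a MediaStreamSample and
                // return it via ReportGetSampleCompleted(sample); dynamic effects
                // such as volume boosts can be applied to the chunk right here.
            }

            protected override void SeekAsync(long seekToTime) { ReportSeekCompleted(seekToTime); }
            protected override void CloseMedia() { }
            protected override void GetDiagnosticAsync(System.Windows.Media.MediaStreamSourceDiagnosticKind diagnosticKind) { }
            protected override void SwitchMediaStreamAsync(System.Windows.Media.MediaStreamDescription mediaStreamDescription) { }
        }

        // Usage: myMediaElement.SetSource(new PcmStreamSource());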

    Read the article

  • Global hotkey capture in VB.net

    - by ggonsalv
    I want to have my app, which is minimized, capture data selected in another app's window when a hotkey is pressed. My app definitely doesn't have the focus. Additionally, when the hotkey is pressed I want to present a fading popup (Outlook style), so my app never gets focus. At a minimum I want to capture the window name, process ID, and the selected data. The app which has focus is not my application. I know one option is to sniff the clipboard, but are there any other solutions? This is to audit the rate of data entry into another system of which I have no control; it is a mainframe emulation client program (Attachmate). The plan is: complete data entry in Application X. Select a certain section of the screen in App X which is proof of data entry (transaction ID). Press the magic hotkey, which then 'sends' the selection to my app. From System.Environment or System.Threading I can find the Windows logon. Similarly, I can also capture the time. All the data will be logged to SQL. Once complete, show the Outlook-style popup saying the data entry has been logged. Any thoughts?
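
    A hedged sketch of the usual approach in VB.NET: a system-wide hotkey via the Win32 RegisterHotKey API (the hotkey choice and ID below are placeholders, and the popup/capture logic is left as comments):

        ' Win32 declarations -- RegisterHotKey fires even while the app is minimized.
        Private Declare Function RegisterHotKey Lib "user32" (ByVal hWnd As IntPtr, _
            ByVal id As Integer, ByVal fsModifiers As Integer, ByVal vk As Integer) As Boolean
        Private Declare Function UnregisterHotKey Lib "user32" (ByVal hWnd As IntPtr, _
            ByVal id As Integer) As Boolean

        Private Const WM_HOTKEY As Integer = &H312
        Private Const MOD_CONTROL As Integer = &H2
        Private Const HOTKEY_ID As Integer = 1 ' arbitrary app-defined id

        Private Sub Form1_Load(ByVal sender As Object, ByVal e As EventArgs) Handles MyBase.Load
            ' Example: Ctrl+F12 as the magic hotkey.
            RegisterHotKey(Me.Handle, HOTKEY_ID, MOD_CONTROL, CInt(Keys.F12))
        End Sub

        Protected Overrides Sub WndProc(ByRef m As Message)
            If m.Msg = WM_HOTKEY AndAlso m.WParam.ToInt32() = HOTKEY_ID Then
                ' Fires without focus: capture the foreground window / selection
                ' (e.g. via the clipboard) here, log to SQL, show the fading popup.
            End If
            MyBase.WndProc(m)
        End Sub

        Protected Overrides Sub OnFormClosing(ByVal e As FormClosingEventArgs)
            UnregisterHotKey(Me.Handle, HOTKEY_ID)
            MyBase.OnFormClosing(e)
        End Sub

    GetForegroundWindow, GetWindowText, and GetWindowThreadProcessId (also in user32) can then supply the window name and process ID of the app that had focus when the hotkey fired.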

    Read the article

  • Technical choices in unmarshaling hash-consed data

    - by Pascal Cuoq
    There seems to be quite a bit of folklore knowledge floating about in restricted circles about the pitfalls of hash-consing combined with marshaling-unmarshaling of data. I am looking for citable references to these tidbits. For instance, someone once pointed me to the aterm library and mentioned that the authors had clearly thought about this and that the representation on disk was bottom-up (children of a node come before the node itself in the data stream). This is indeed the right way to do things when you need to re-share each node (with a possibly identical node already in memory). This re-sharing pass needs to be done bottom-up, so the unmarshaling itself might as well be too, so that it's possible to do everything in a single pass. I am in the process of describing difficulties encountered in our own context, and the solutions we found. I would appreciate any citable reference to the kind of aforementioned folklore knowledge. Some people obviously have encountered the problems before (the aterm library is only one example), but I didn't find anything in writing. Even the little piece of information I have about aterm is hearsay. I am not worried that it's unreliable (you can't make this up), but "personal communication" and "look how it's done in the source code" are considered poor form in citations. I have enough references on hash-consing alone. I am only interested in references where it interferes with other aspects of programming, such as marshaling or distribution.
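
    On the bottom-up point, a minimal OCaml sketch (all types and names invented here, not taken from aterm) of the single-pass unmarshal-with-resharing: because each child precedes its parent in the stream, every node can be re-shared against the in-memory table the moment it is rebuilt:

        (* Hypothetical hash-consed term type and its sharing table. *)
        type term = Leaf of string | Node of term * term

        let table : (term, term) Hashtbl.t = Hashtbl.create 1024

        (* Return the canonical copy of a structurally equal term.
           (A real implementation would hash on the children's unique
           tags rather than on the whole structure.) *)
        let hashcons t =
          try Hashtbl.find table t
          with Not_found -> Hashtbl.add table t t; t

        (* Bottom-up stream: each instruction refers to earlier entries. *)
        type instr = ILeaf of string | INode of int * int

        let unmarshal (stream : instr list) : term =
          let built = Array.make (List.length stream) (Leaf "") in
          List.iteri
            (fun k ins ->
               built.(k) <- (match ins with
                 | ILeaf s -> hashcons (Leaf s)
                 | INode (i, j) -> hashcons (Node (built.(i), built.(j)))))
            stream;
          built.(List.length stream - 1)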

    Read the article

  • Content Management Systems for Adaptive Content [closed]

    - by andrewap
    Content management systems (CMS) allow us to easily maintain blogs, news sites, general websites, and so on. Many of them are designed to manage pages of content, and provide tools to organize and customize how that content is displayed on the web. However, as explained by Mark Boulton in his Adaptive Content Management article, and by Karen McGrane in her talk on Adapting Ourselves to Adaptive Content, we are increasingly delivering content not just to the web, but also to other platforms and channels. We need tools to manage pieces of content with meaningful metadata attached. Create once, publish everywhere. The main idea is to store content cleanly, without intertwining it with presentation markup specific to the web. Because pieces of content are compartmentalized semantically, they can easily adapt to fit different platforms and channels. Hence, it's called adaptive content. Let's look at a quick example to compare: Say I manage news articles and events. To create a news article, I would tell the CMS the type of content I'm creating, and be asked to fill in a form with individual fields tailored to news articles (e.g. headline, subtitle, full text, short snippet, and images) — i.e. pieces of content. With a traditional web publishing tool, I would probably have had to create a new page under News, and then type in and format the news article in a blank WYSIWYG text editor — i.e. pages of content. As you can see, the first design allows me to specify content individually in its smallest semantic units. When I want to display or consume it, the system can easily provide the pieces I need. So here's my question: Is there a CMS that is designed specifically with adaptive content in mind, and that is decoupled from the presentation layer? Note: This is not a discussion about the best CMS, or which CMS I should use. I am asking whether a very specific type of tool — a CMS designed for adaptive content — exists for developers to use.

    Read the article

  • How do I update a progress bar in Cocoa during a long running loop?

    - by Nic
    Hi, I've got a while loop that runs for many seconds, and that's why I want to update a progress bar (NSProgressIndicator) during that process, but it updates only once, after the loop has finished. The same happens if I want to update a label's text, by the way. I believe my loop prevents other things in the application from happening. There must be another technique. Does this have to do with threads or something? Am I on the right track? Can someone please give me a simple example of how to “optimize” my application? My application is a Cocoa application (Xcode 3.2.1) with these two methods in my Example_AppDelegate.m:

        // This method runs when a start button is clicked.
        - (IBAction)startIt:(id)sender {
            [progressbar setDoubleValue:0.0];
            [progressbar startAnimation:sender];
            running = YES; // this is an instance variable
            int i = 0;
            while (running) {
                if (i++ >= processAmount) { // processAmount is something like 1000000
                    running = NO;
                    continue;
                }
                // Update progress bar
                double progr = (double)i / (double)processAmount;
                NSLog(@"progr: %f", progr); // Logs values between 0.0 and 1.0
                [progressbar setDoubleValue:progr];
                [progressbar needsDisplay]; // Do I need this?
                // Do some more hard work here...
            }
        }

        // This method runs when a stop button is clicked, but as long
        // as -startIt is busy, a click on the stop button does nothing.
        - (IBAction)stopIt:(id)sender {
            NSLog(@"Stop it!");
            running = NO;
            [progressbar stopAnimation:sender];
        }

    I'm really new to Objective-C, Cocoa, and applications with a UI. Thank you very much for any helpful answer.
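
    For reference, a minimal sketch of the usual pattern (doWork/updateProgress: are made-up names; the rest is stock Cocoa API): run the loop on a background thread so the main run loop stays free, and push UI updates back to the main thread, since AppKit views must only be touched there:

        // Kick the work off the main thread instead of blocking the run loop.
        - (IBAction)startIt:(id)sender {
            running = YES;
            [progressbar startAnimation:sender];
            [self performSelectorInBackground:@selector(doWork) withObject:nil];
        }

        // Runs on the background thread; never touch AppKit views here directly.
        - (void)doWork {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            for (int i = 0; running && i < processAmount; i++) {
                // ... hard work for step i ...
                NSNumber *progr = [NSNumber numberWithDouble:(double)i / (double)processAmount];
                [self performSelectorOnMainThread:@selector(updateProgress:)
                                       withObject:progr
                                    waitUntilDone:NO];
            }
            [pool release];
        }

        // Back on the main thread, where UI updates are safe.
        - (void)updateProgress:(NSNumber *)progr {
            [progressbar setDoubleValue:[progr doubleValue]];
        }

    The stop button then works too, because the main thread is free to deliver its click and flip the running flag.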

    Read the article

  • What languages should a microISV use to write commercial software?

    - by Wal
    I've been writing software in Java for many years now, but it was always for internal applications that would be deployed to a server. I'd like to get into writing desktop applications now, but I don't know where to start. I've written a few Java/Swing applications, but again they were for internal use. My understanding is that Java and other semi-compiled and interpreted languages are too easy to reverse engineer, making them unsuitable for commercial software. I am aware that there are compilers for Java and some other interpreted languages, but I've also heard that they are pricey and/or unreliable. Assuming I start a microISV and wish to develop and sell applications to a broad audience, what's my best bet? I would prefer something that can be written once (or close to it) and compiled for different operating systems, but I am not opposed to .NET and a Windows-only audience if other languages would compromise the experience (installation ease & user experience) on Windows. My only issue there is that I don't have a large starting budget, and paying out the wazoo for the required development tools is not really in the cards.

    Read the article

  • Trouble Running Leaks Instrument

    - by TheGeoff
    I'm having trouble running the Leaks instrument since installing the 3.0 SDK. An NDA disclaimer here: I don't think this is a 3.0 SDK issue, just a configuration problem, so I'm looking for advice on configuring the tools in question, not the 3.0 SDK per se. Here's the breakdown of the behavior I am seeing. My application is compiled against OS version 2.2. I can run it out of Xcode in debug mode on the Simulator and on devices running 2.2, 2.2.1, and 3.0. If I start it with Performance Tools - Leaks, I get an error message from the OS: “The application xxxx quit unexpectedly”, “Ignore, Report, Relaunch.” If I click “Ignore”, one of two things will happen: either Leaks tells me it couldn't attach, or Leaks stops responding to input and I have to Force Quit. The interesting thing is that the Simulator starts with the 3.0 OS. If I start Instruments manually and attach to a running 2.2 Simulator, it shows the same behavior. If I attach Leaks to an iPhone device, it works. It seems that once I launch Leaks, my app won't run in the simulator until I do a new build. Any ideas for getting my Simulator/Leaks/Xcode setup synced back up? Thanks, Geoff

    Read the article

  • Creating futures using Apple's GCD

    - by jer
    I'm working on a library which implements the actor model on top of Grand Central Dispatch (specifically the C-level API, libdispatch). A brief overview of my system: Communication happens between actors using messages. Multicast communication only (one actor to many actors). Senders and receivers are decoupled from one another using a blackboard where messages are pushed. Messages are sent on the default queue asynchronously using dispatch_group_async() once a message gets pushed onto the blackboard. I'm trying to implement futures in the language right now, so I've created a new type which holds some information: a group of its own, and the value being 'returned'. However, I have a problem, since dispatch_block_t is of type void (^)(void), so it doesn't return anything. So my idea, in my future_new() function, of setting up another group which can be used to execute a block returning a result, which I could store in the "value" member of my future_t structure, isn't going to work. The rest of the futures implementation is very clear, except it all depends on being able to get the value into the future back from the actor acting on the message. When using the library, it would greatly reduce its usefulness if I had to ask users (and myself) to be aware of when futures were going to be used by other parts of the system; it just isn't practical. I'm wondering if anyone can think of a way around this?
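
    For what it's worth, one hedged sketch in C against libdispatch (the future_t layout and names are invented): have the computation block return void * and wrap it in a plain void block that stores the result, so dispatch itself never needs a returning block:

        #include <dispatch/dispatch.h>
        #include <stdlib.h>

        // Hypothetical future: a group to wait on, plus a slot for the value.
        typedef struct future {
            dispatch_group_t group;
            void *value;
        } future_t;

        // The computation returns void* instead of void; the wrapper block
        // captures the future and stores the result into it.
        future_t *future_new(void *(^work)(void)) {
            future_t *f = malloc(sizeof *f);
            f->group = dispatch_group_create();
            dispatch_group_async(f->group,
                dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
                f->value = work();  // the group empties once this finishes
            });
            return f;
        }

        // Block until the value is available, then hand it back.
        void *future_get(future_t *f) {
            dispatch_group_wait(f->group, DISPATCH_TIME_FOREVER);
            return f->value;
        }

    Callers never see the plumbing: the actor's handler is wrapped at send time, so other parts of the system don't need to know a future is in play.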

    Read the article

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL, one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server) and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks. One of them: under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens on a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate). The other most important (and annoying) recurring issue is that when, for some reason, we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work and I have to manually dump on one end and upload on the other, quite a task nowadays with some 0.5 TB worth of data. Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where if one server is not up, the main URL just resolves to the other server.

    Read the article

  • Twitter bootstrap modal loads wrong remote data

    - by Victor S
    I'm using Twitter Bootstrap's modal features and loading data from remote locations. I'm providing the remote url for a set of thumbnails, with the hope that once a thumbnail is clicked, the appropriate data (a large version of the image) is displayed. I'm using the declarative HTML style to define the remote urls and all the features of the modal. What I find is that the modal loads the first remote url fine but then never displays subsequent remote data (although a request to the proper url is made in Chrome); it always displays the first loaded data. How do I get it to show the proper data?

    View:

        #gallery-navigation
          %ul
            - @profile.background_images.each do |image|
              %li
                = link_to image_tag(image.background_image.url(:thumb)), remote_image_path(image.id),
                  :role => "button",
                  :data => { :toggle => "modal", :target => "#image-modal", :remote => remote_image_path(image.id) },
                  :id => "image-modal"

        / Modal
        #image-modal.modal.hide.fade(role="dialog" aria-hidden="true" data-backdrop="true")
          .modal-body

    Controller:

        def remote_image
          @image = current_user.profile.background_images.find(params[:image_id])
          respond_to do |format|
            format.html { render :partial => "remote_image", :locals => { :image => @image } }
          end
        end
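
    For reference, the commonly cited workaround for this in Bootstrap 2 (hedged: 'hidden' and the 'modal' data key are the Bootstrap 2 names; note also that repeating :id => "image-modal" on every link produces duplicate DOM ids): the modal plugin caches the first remote load, so clear its state when it hides:

        // Bootstrap 2.x caches the first remote load; wiping the plugin's
        // data on 'hidden' forces a fresh load for the next thumbnail.
        $('#image-modal').on('hidden', function () {
            $(this).removeData('modal');
            $(this).find('.modal-body').empty(); // avoid flashing stale content
        });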

    Read the article

  • To implement a remote desktop sharing solution

    - by Cameigons
    Hi, I'm in the planning/modeling phase of developing a remote desktop sharing solution, which must be web browser based. In other words: a user will be able to see and interact with someone's remote desktop using his web browser. Everything the user who wants to share his desktop needs, besides his browser, is to install an add-in, which he's going to be prompted about when necessary. The add-in is required since (afaik) no browser technology allows desktop control from an app running within the browser alone. The add-in installation process must be as simple and transparent as possible to the user (similar to AdobeConnectNow, in case anyone's acquainted with it). The user can share his desktop with lots of people at the same time, but concede desktop control to only one of them at a time (makes no sense being otherwise). Project requirements: All technology employed must be open-source license compatible. Both front ends are going to be in Flash (browser). Must work on Linux, Windows XP (and later) and Mac OS X. Must work at least with IE7 (and later) and Firefox 3.0 (and later). At the very least, once the sharer's stream hits the server from where it'll be broadcast, from here on it must be broadcast in FLV (so I'm deciding whether to do the encoding on the client's machine (the one sharing the desktop) or send it in some other format to the server and encode it there). Performance and scalability are important: it must be able to handle hundreds of users (one desktop sharer, the rest viewers). We'll definitely be using Red5. My doubts mostly concern implementing the desktop publisher side (add-in and streamer): 1) Are you aware of other projects that I could look into for ideas? (I'm aware of bigbluebutton.org and code.google.com/p/openmeetings) 2) Should I base myself on VNC? 3) Bearing in mind the need to have it working cross-platform, what language should I go with? (My team is very used to Java, and I have some knowledge of C/C++, but anything goes really.) 4) Any other advice is appreciated.

    Read the article

  • KODO: how to set up a fetch plan for bidirectional relationships?

    - by BestPractices
    Running KODO 4.2 and having an issue with inefficient queries being generated by KODO. This happens when fetching an object that contains a collection where that collection has a bidirectional relationship back to the first object:

        class Classroom {
            List<Student> _students;
        }
        class Student {
            Classroom _classroom;
        }

    If we create a fetch plan to get a list of Classrooms and their corresponding Students by setting up the following fetch plan:

        fetchPlan.addField(Classroom.class, "_students");

    this will result in two queries (get the classrooms, then get all students that are in those classrooms), which is what we would expect. However, if we include the reference back to the classroom in our fetch plan in order for the _classroom field to get populated, by doing fetchPlan.addField(Student.class, "_classroom"), this will result in X additional queries, where X is the number of students in each classroom. Can anyone explain how to fix this? KODO already has the original Classroom objects at the point that it's executing the queries to retrieve the Classroom objects and set them in each Student object's _classroom field. So I would expect KODO to simply set those objects in the _classroom field on each Student object accordingly and not go back to the database. Once again, the documentation is sorely lacking with Kodo/JDO/OpenJPA, but from what I've read it should be able to do this more efficiently. Note: EAGER_FETCH.PARALLEL is turned on, and I have tried this with caching (query and data caches) turned on and off; there is no difference in the resultant queries.

    Read the article

  • Question about the evolution of the interaction paradigm between web server programs and content provider programs

    - by smwikipedia
    Hi experts, in my opinion, the web server is responsible for delivering content to the client. If it is static content, like pictures and static HTML documents, the web server just delivers it as a bitstream directly. If it is dynamic content that is generated while processing the client's request, the web server will not generate the content itself but will call some external program to generate the content. AFAIK, these kinds of dynamic content generation technologies include the following: CGI, ISAPI, ... And from here, I noticed that: "...In IIS 7, modules replace ISAPI filters..." Are there any others? Could anyone help me complete the above list and elaborate on, or show some links about, their evolution? I think it would be very helpful for understanding applications such as IIS, Tomcat, and Apache. I once wrote a small CGI program, and though it serves as a content generator, it is still nothing but a normal standalone program. I call it normal because the CGI program has a main() entry point. But with recent technology like ASP.NET, I am not writing a complete program, but only a class library. Why did such a radical change happen? Many thanks.
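
    To illustrate the "normal standalone program" point, a hedged sketch of a complete CGI content generator in C: the server passes the request through environment variables and relays whatever the program prints to stdout back to the client:

        #include <stdio.h>
        #include <stdlib.h>

        /* A complete CGI "content generator": the web server sets the
           environment, runs this executable, and relays stdout to the client. */
        int main(void) {
            const char *query = getenv("QUERY_STRING");   /* set by the server */

            printf("Content-Type: text/html\r\n\r\n");    /* header, blank line */
            printf("<html><body><p>You asked for: %s</p></body></html>\n",
                   query ? query : "(nothing)");
            return 0;
        }

    The later models (ISAPI, Apache modules, servlets, ASP.NET) trade this one-process-per-request entry point for a library loaded into the server's own process, which is largely why the main() disappears.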

    Read the article

  • IIS7 integrated mode MVC deploy 404 not found on some actions

    - by majkinetor
    Hello. Once deployed, parts of my web application stop working. The Index on each controller works, and one form posting via AJAX; other than that, everything yields 404. I understand that nothing in particular should need to be done in integrated mode. One interesting thing is that one AJAX action is working. I don't know how to proceed with troubleshooting. Some info: the app is using the default app pool set to integrated mode; the web app is built on .NET Framework 3.5; I use the default routing model; alongside the web.config in the root there is a web.config in the /View folder referencing HttpNotFoundHandler; the OS is Windows Server 2008 with IIS 7; admins ran aspnet_regiis.exe -i. Any help is appreciated. Thx. EDIT: Noticed one very strange thing. One AJAX call works; another AJAX call doesn't work right after I log in, but when I move to another page and return to that one, it starts working?! A small video of the problem is available here.
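
    A commonly suggested first check for MVC 404s under IIS 7 integrated mode (hedged: whether it applies depends on which routes are failing) is making sure managed modules, including the routing module, run for extensionless requests:

        <!-- root web.config: run managed modules for every request so
             extensionless MVC routes are seen in integrated mode -->
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>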

    Read the article

  • How to do binding programmatically?

    - by user175908
    Hello, could anyone identify the problem in this code? (I'm kind of a newbie with WPF bindings.) This code executes after the chart is loaded, when I click a button. I get this error: Update: I don't get that error anymore, thanks to Tomas. Now no error occurs, but the chart looks completely blank (no columns). Update: The code now looks like this:

        // create a very simple DataSet
        var dataSet = new DataSet("MyDataSet");
        var table = dataSet.Tables.Add("MyTable");
        table.Columns.Add("Name");
        table.Columns.Add("Price");
        table.Rows.Add("Brick", 1.5d);
        table.Rows.Add("Soap", 4.99d);
        table.Rows.Add("Comic Book", 0.99d);

        // chart series
        var series = new ColumnSeries()
        {
            IndependentValueBinding = new Binding("[Name]"),  // How to deal with
            DependentValueBinding = new Binding("[Price]"),   // these two?
            ItemsSource = dataSet.Tables[0].DefaultView       // or maybe I make a mistake here?
        };

        // ---------- set additional binding as advised ------------------
        series.SetBinding(ColumnSeries.ItemsSourceProperty, new Binding());

        // chart stuff
        MyChart.Series.Add(series);
        MyChart.Title = "Names 'n Prices";

        // some code to remove the legend
        var style = new Style(typeof(Control));
        style.Setters.Add(new Setter(LegendItem.TemplateProperty, null));
        MyChart.LegendStyle = style;

    XAML:

        <Window x:Class="BindingzTest.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:charting="clr-namespace:System.Windows.Controls.DataVisualization.Charting;assembly=System.Windows.Controls.DataVisualization.Toolkit"
            Title="MainWindow" Height="606" Width="988">
            <Grid Name="LayoutRoot">
                <charting:Chart Name="MyChart" Margin="0,0,573,0" Height="289" VerticalAlignment="Top" />
                <Button Content="Button" Height="23" HorizontalAlignment="Left" Margin="272,361,0,0"
                        Name="button1" VerticalAlignment="Top" Width="75" Click="chart1_Loaded" />
            </Grid>
        </Window>

    Thanks for the help in advance once more.

    Read the article

  • Is there a practical benefit to casting a NULL pointer to an object and calling one of its member functions?

    - by zdawg
    OK, so I know that technically this is undefined behavior, but nonetheless, I've seen this more than once in production code. And please correct me if I'm wrong, but I've also heard that some people use this "feature" as a somewhat legitimate substitute for a lacking aspect of the current C++ standard, namely, the inability to obtain the address (well, offset really) of a member function. For example, this is out of a popular implementation of a PCRE (Perl-compatible Regular Expression) library:

        #ifndef offsetof
        #define offsetof(p_type,field) ((size_t)&(((p_type *)0)->field))
        #endif

    One can debate whether the exploitation of such a language subtlety in a case like this is valid or not, or even necessary, but I've also seen it used like this:

        struct Result {
            void stat() {
                if (this) {
                    // do something...
                } else {
                    // do something else...
                }
            }
        };

        // ...somewhere else in the code...
        ((Result*)0)->stat();

    This works just fine! It avoids a null pointer dereference by testing for the existence of this, and it does not try to access class members in the else block. So long as these guards are in place, it's legitimate code, right? So the question remains: is there a practical use case where one would benefit from using such a construct? I'm especially concerned about the second case, since the first case is more of a workaround for a language limitation. Or is it? PS: Sorry about the C-style casts; unfortunately, people still prefer to type less if they can.

    Read the article

  • What are simple instructions for creating a Python package structure and egg?

    - by froadie
    I just completed my first (minor) Python project, and my boss wants me to package it nicely so that it can be distributed and called from other programs easily. He suggested I look into eggs. I've been googling and reading, but I'm just getting confused. Most of the sites I'm looking at explain how to use Python eggs that were already created, or how to create an egg from a setup.py file (which I don't yet have). All I have now is an Eclipse pydev project with about 4 modules and a settings/configuration file. In easy steps, how do I go about structuring it into folders/packages and compiling it into an egg? And once it's an egg, what do I have to know about deploying/building/using it? I'm really starting from scratch here, so don't assume I know anything; simple step-by-step instructions would be really helpful... These are some of the sites that I've been looking at so far: http://peak.telecommunity.com/DevCenter/PythonEggs http://www.packtpub.com/article/writing-a-package-in-python http://www.ibm.com/developerworks/library/l-cppeak3.html#N10232 I've also browsed a few SO questions but haven't really found what I need. Thanks!
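
    For orientation, a minimal setuptools layout (every name below is a placeholder): move the modules into a package folder containing an __init__.py, put setup.py next to it at the project root, and one command produces the egg:

        # setup.py -- minimal setuptools configuration (placeholder names)
        from setuptools import setup, find_packages

        setup(
            name="myproject",          # distribution name
            version="0.1",
            packages=find_packages(),  # picks up any folder with __init__.py
            package_data={"myproject": ["settings.cfg"]},  # ship the config too
        )

    Then "python setup.py bdist_egg" drops the egg into dist/, "python setup.py sdist" builds a plain source distribution, and consumers install either one with easy_install and then simply import the package.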

    Read the article

  • PostgreSQL triggers and passing parameters

    - by iandouglas
    This is a multi-part question. I have a table similar to this:

        CREATE TABLE sales_data (
            Company character(50),
            Contract character(50),
            top_revenue_sum integer,
            top_revenue_sales integer,
            last_sale timestamp
        );

    I'd like to create a trigger for new inserts into this table, something like this:

        CREATE OR REPLACE FUNCTION add_contract() RETURNS VOID
        DECLARE
            myCompany character(50),
            myContract character(50),
        BEGIN
            myCompany = TG_ARGV[0];
            myContract = TG_ARGV[1];
            IF (TG_OP = 'INSERT') THEN
                EXECUTE 'CREATE TABLE salesdata_' || $myCompany || '_' || $myContract || ' (
                    sale_amount integer,
                    updated TIMESTAMP not null,
                    some_data varchar(32),
                    country varchar(2)
                );'
                EXECUTE 'CREATE TRIGGER update_sales_data BEFORE INSERT OR DELETE
                    ON salesdata_' || $myCompany || '_' || $myContract || '
                    FOR EACH ROW EXECUTE update_sales_data(' || $myCompany || ',' || $myContract || ', revenue);';
            END IF;
        END;
        $add_contract$ LANGUAGE plpgsql;

        CREATE TRIGGER add_contract AFTER INSERT ON sales_data
            FOR EACH ROW EXECUTE add_contract();

    Basically, every time I insert a new row into sales_data, I want to generate a new table where the name of the table will be defined as something like "salesdata_Company_Contract". So my first question is: how can I pass the Company and Contract data to the trigger so it can be passed to the add_contract() stored procedure? From my stored procedure, you'll see that I also want to update the original sales_data table whenever new data is inserted into the salesdata_Company_Contract table. This trigger will do something like this:

        CREATE OR REPLACE FUNCTION update_sales_data() RETURNS trigger AS $update_sales_data$
        DECLARE
            myCompany character(50) NOT NULL,
            myContract character(50) NOT NULL,
            myRevenue integer NOT NULL
        BEGIN
            myCompany = TG_ARGV[0];
            myContract = TG_ARGV[1];
            myRevenue = TG_ARGV[2];
            IF (TG_OP = 'INSERT') THEN
                UPDATE sales_data
                SET top_revenue_sales = top_revenue_sales + 1,
                    top_revenue_sum = top_revenue_sum + $myRevenue,
                    updated = now()
                WHERE Company = $myCompany AND Contract = $myContract;
            ELSIF (TG_OP = 'DELETE') THEN
                UPDATE sales_data
                SET top_revenue_sales = top_revenue_sales - 1,
                    top_revenue_sum = top_revenue_sum - $myRevenue,
                    updated = now()
                WHERE Company = $myCompany AND Contract = $myContract;
            END IF;
        END;
        $update_sales_data$ LANGUAGE plpgsql;

    This will, of course, require that I pass several parameters around within these stored procedures and triggers, and I'm not sure (a) if this is even possible, (b) if it is practical, or (c) whether it is best practice and we should just put this logic into our other software instead of asking the database to do this work for us. To keep our table sizes down, as we'll have hundreds of thousands of transactions per day, we've decided to partition our data using the Company and Contract strings as part of the table names themselves, so each table stays very small; file IO is faster for us, and we felt we'd get better performance. Thanks for any thoughts or direction. My thinking, now that I've written all of this out, is that maybe we need to write stored procedures where we pass our insert data as parameters, call that from our other software, and have the stored procedure do the insert into sales_data and then create the other table. Then have a second stored procedure to insert new data into the salesdata_Company_Contract tables, where the table name is passed to the stored proc as a parameter, and again have that stored proc do the insert and then update the main sales_data table afterward. What approach would you take?
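
    On the first question, a hedged sketch: a row-level trigger function doesn't need TG_ARGV for this at all, because it can read the inserted row directly from the NEW record (identifier quoting/sanitizing is left aside; trim() strips the char(50) padding):

        -- Sketch: build the per-contract table name from the inserted row itself.
        CREATE OR REPLACE FUNCTION add_contract() RETURNS trigger AS $$
        BEGIN
            EXECUTE 'CREATE TABLE salesdata_' || trim(NEW.Company) || '_' || trim(NEW.Contract) || ' (
                sale_amount integer,
                updated timestamp NOT NULL,
                some_data varchar(32),
                country varchar(2)
            )';
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;

        CREATE TRIGGER add_contract AFTER INSERT ON sales_data
            FOR EACH ROW EXECUTE PROCEDURE add_contract();

    TG_ARGV only carries the literal strings written in the CREATE TRIGGER statement itself, so it cannot see per-row values; NEW (and OLD, for the DELETE branch of the second trigger) is the supported channel.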

    Read the article

  • how to split a very large database on sql server

    - by ken jackson
    I have a 90 GB SQL Server database that I want to make more manageable. It stores stock data for 50+ different stocks from 2009 and 2010, and each stock is a separate table. Some tables have hundreds of millions of rows, and others have just a few million. What I want to do is somehow split the database so that I don't have a single database file that is 90 GB. What I want is to be able to somehow magically split all the tables so that I can back up the 2009 data once and not have to keep including it in the backup every time I back up the entire database; however, I would like the 2009 data to be included whenever I run a query. Is partitioning the database the way to go? Will it do the above for me, or will I need some other solution? I researched partitioning, but I wasn't sure if that would solve all my problems. I wasn't able to find anything that would tell me whether or not it would migrate preexisting data, or whether it only worked for newly inserted data. Any help or pointers would be much appreciated. Thanks in advance, Ken
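
    For a sense of what table partitioning buys here, a hedged T-SQL sketch (filegroup, table, and column names are all placeholders; table partitioning is an Enterprise Edition feature): per-year filegroups let the static 2009 data be backed up once while staying queryable:

        -- Route rows to per-year filegroups by trade date (placeholder names).
        CREATE PARTITION FUNCTION pfYear (datetime)
            AS RANGE RIGHT FOR VALUES ('2010-01-01');

        CREATE PARTITION SCHEME psYear
            AS PARTITION pfYear TO (fg2009, fg2010);

        -- Each stock table (new, or rebuilt from the existing one) sits on the scheme:
        CREATE TABLE dbo.StockAAPL (
            trade_time datetime NOT NULL,
            price      money    NOT NULL
            -- ...
        ) ON psYear (trade_time);

        -- Once 2009 is static, make its filegroup read-only and back it up one time:
        ALTER DATABASE StockDb MODIFY FILEGROUP fg2009 READONLY;
        BACKUP DATABASE StockDb FILEGROUP = 'fg2009' TO DISK = 'D:\backup\fg2009.bak';

    Existing data does migrate: rebuilding a table's clustered index on the partition scheme physically moves the rows, so this is not limited to newly inserted data.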

    Read the article

  • Serialize Forms into memory

    - by serhio
    Background: In a desktop MDI application we have a lot of forms. Task: Save the contents of the controls of closed forms (textbox texts, checkbox checks, etc.). Limitations: A saved form per (DB/Windows) user? Per (DB/Windows) group of users? Both variants may be possible. Question: (a) What is the best way? (b) If I don't want to use files, how do I serialize the form into a MemoryStream and then restore it when a form that was previously opened and serialized is opened again? Starting point: I implemented a form that implements ISerializable; it deserializes the form on opening and serializes it on closing:

        Public Sub GetObjectData(ByVal info As System.Runtime.Serialization.SerializationInfo, _
                ByVal context As System.Runtime.Serialization.StreamingContext) _
                Implements System.Runtime.Serialization.ISerializable.GetObjectData
            info.AddValue("tbxExportFolder", tbxExportFolder.Text, GetType(String))
            info.AddValue("cbxCheckAliasUnicity", cbxCheckAliasUnicity.Checked, GetType(Boolean))
        End Sub

        Public Sub New(ByVal info As SerializationInfo, ByVal context As StreamingContext)
            Me.New()
            Me.tbxExportFolder.Text = info.GetString("tbxExportFolder")
            Me.cbxCheckAliasUnicity.Checked = info.GetBoolean("cbxCheckAliasUnicity")
        End Sub

        Private Sub SerializeMe()
            Dim binFormatter As New Formatters.Binary.BinaryFormatter
            Dim fileStream As New FileStream(SerializedFilename, FileMode.Create)
            Try
                binFormatter.Serialize(fileStream, Me)
            Catch
                Throw
            Finally
                fileStream.Close()
            End Try
        End Sub

        Protected Overrides Sub OnClosing(ByVal e As System.ComponentModel.CancelEventArgs)
            SerializeMe()
            MyBase.OnClosing(e)
        End Sub
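
    On question (b), a hedged sketch (the FormSnapshots dictionary and the RestoreOrNew name are invented): swap the FileStream for a MemoryStream, park the bytes per form type, and deserialize from them when the form is reopened:

        ' Hypothetical in-memory store: one serialized snapshot per form type.
        Private Shared FormSnapshots As New Dictionary(Of String, Byte())

        Private Sub SerializeMe()
            Dim binFormatter As New Formatters.Binary.BinaryFormatter
            Using ms As New MemoryStream()
                binFormatter.Serialize(ms, Me)
                FormSnapshots(Me.GetType().FullName) = ms.ToArray()
            End Using
        End Sub

        ' On opening: rebuild the form from its snapshot if one exists.
        Public Shared Function RestoreOrNew(Of T As Form)() As T
            Dim key As String = GetType(T).FullName
            If FormSnapshots.ContainsKey(key) Then
                Dim binFormatter As New Formatters.Binary.BinaryFormatter
                Using ms As New MemoryStream(FormSnapshots(key))
                    Return CType(binFormatter.Deserialize(ms), T)
                End Using
            End If
            Return Activator.CreateInstance(Of T)()
        End Function

    Keying the dictionary on the Windows or DB user name instead of (or as well as) the type name would cover the per-user and per-group variants.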

    Read the article

  • How to convert Unicode strings (\u00e2, etc) into NSString for display?

    - by karlbecker_com
    I am trying to support arbitrary Unicode from a variety of international users. They have already put a bunch of data into sqlite databases on their iPhones, and now I want to capture the data into a database, then send it back to their device. Right now I am using a php page that sends data back from an internet mysql database. The data is saved in the mysql database properly, but when it's sent back it comes out as unicode text, such as Frank\u00e2\u0080\u0099s iPad instead of just Frank's iPad, where the apostrophe should really be a curly apostrophe. The answer posted to another question indicates that there are no built-in Cocoa methods to convert the "\u00e2\u0080\u0099" portion of the unicode string from the webserver to an NSString object. Is this correct? That seems really surprising (and scarily disappointing), since Cocoa definitely allows input of many different Unicode characters, and I need to support any arbitrary language that I have never heard of, and all of the possible characters. I save them to and from the local sqlite database just fine now, but once I send them to a web server, then perhaps pull down different data, I want to ensure the data pulled from the web server is correctly formatted.
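
    One hedged observation: \u00e2\u0080\u0099 is the three UTF-8 bytes of the curly apostrophe (U+2019) escaped one byte at a time, so once the \uXXXX escapes have been parsed (by a JSON library, say), the mangled string can be repaired with a Latin-1 to UTF-8 round-trip:

        // mojibake: the parsed string whose characters are really UTF-8 bytes.
        NSData *rawBytes = [mojibake dataUsingEncoding:NSISOLatin1StringEncoding];
        NSString *fixed = [[[NSString alloc] initWithData:rawBytes
                                                 encoding:NSUTF8StringEncoding] autorelease];
        // fixed now reads "Frank's iPad" with a proper curly apostrophe.

    The cleaner fix is upstream: have the php page emit real UTF-8 rather than escaping each byte, keeping the round-trip only for data that was already mangled.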

    Read the article

  • Finding the Column Index for a Specific Value

    - by Btibert3
    Hi All, I am having a brain cramp. Below is a toy dataset:

        df <- data.frame(
          id = 1:6,
          v1 = c("a", "a", "c", NA, "g", "h"),
          v2 = c("z", "y", "a", NA, "a", "g"),
          stringsAsFactors = F)

    I have a specific value that I want to find across a set of defined columns, and I want to identify the position it is located in. The fields I am searching are characters, and the trick is that the value I am looking for might not exist. In addition, null strings are also present in the dataset. Assuming I knew how to do this, the variable position indicates the values I would like returned:

        > df
          id   v1   v2 position
        1  1    a    z        1
        2  2    a    y        1
        3  3    c    a        2
        4  4 <NA> <NA>       99
        5  5    g    a        2
        6  6    h    g       99

    The general rule is that I want to find the position of the value "a", and if it is not located or if v1 is missing, then I want 99 returned. In this instance, I am searching across v1 and v2, but in reality I have 10 different variables. It is also worth noting that the value I am searching for can only exist once across the 10 variables. What is the best way to generate this recode? Many thanks in advance.
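
    A hedged sketch in base R (extend the column vector for the real 10 variables): match() returns the first position of "a" within each row, and NA, whether from a missing row or a failed match, maps to 99:

        # first position of "a" per row across the searched columns
        pos <- apply(df[, c("v1", "v2")], 1, function(r) match("a", r))
        df$position <- ifelse(is.na(pos), 99, pos)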

    Read the article

  • Python raw strings and trailing backslashes

    - by dash-tom-bang
    I ran across something once upon a time and wondered if it was a Python "bug" or at least a misfeature. I'm curious if anyone knows of any justifications for this behavior. I thought of it just now reading "Code Like a Pythonista," which has been enjoyable so far. I'm only familiar with the 2.x line of Python. Raw strings are strings that are prefixed with an r. This is great because I can use backslashes in regular expressions and I don't need to double everything everywhere. It's also handy for writing throwaway scripts on Windows, so I can use backslashes there also. (I know I can also use forward slashes, but throwaway scripts often contain content cut&pasted from elsewhere in Windows.) So great! Unless, of course, you really want your string to end with a backslash. There's no way to do that in a 'raw' string:

        In [9]: r'\n'
        Out[9]: '\\n'

        In [10]: r'abc\n'
        Out[10]: 'abc\\n'

        In [11]: r'abc\'
        ------------------------------------------------
           File "<ipython console>", line 1
             r'abc\'
                   ^
        SyntaxError: EOL while scanning string literal

        In [12]: r'abc\\'
        Out[12]: 'abc\\\\'

    So one slash before the closing quote is an error, but two slashes give you two slashes! Certainly I'm not the only one that is bothered by this? Thoughts on why 'raw' strings are 'raw, except for slash-quote'? I mean, if I wanted to embed a single quote in there I'd just use double quotes around the string, and vice versa. If I wanted both, I'd just triple quote. If I really wanted three quotes in a row in a raw string, well, I guess I'd have to deal, but is this considered "proper behavior"?

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: There are about 2 million products. The product model would be pretty bad for an RDBMS, as there'd be many different kinds of products with unique attributes. For example, there'd be books, which have isbn, authors, title, pages, etc., as well as dvds with play time, directors, artists, etc., and quite a few more types. In the end, I'd have about 9 different products with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be: The EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products, etc.). Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications. Create a table for each product type: I personally don't like this approach, as it complicates everything else. C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product DTO that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed in list views, etc.). In the app I'll use BookBehaviour / DvdBehaviour, which are wrappers around a product DTO but only expose the relevant properties. My questions now: Are my performance concerns with the many-columns approach valid? Did I overlook something, and is there a much better way to do it in an RDBMS? Is MongoDB on Windows stable enough? Does my approach with different behaviour wrappers make sense?
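
    For illustration, a rough C# shape of the single product document described above (all names are invented; no NoRM specifics assumed): common fields at the top level, type-specific details as optional nested objects, which a document store persists without a fixed column set:

        // Hypothetical product document: common fields plus optional per-type details.
        public class Product
        {
            public string Id { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }

            // Exactly one of these is non-null, depending on the product type.
            public BookDetails Book { get; set; }
            public DvdDetails Dvd { get; set; }
        }

        public class BookDetails
        {
            public string Isbn { get; set; }
            public string[] Authors { get; set; }
            public int Pages { get; set; }
        }

        public class DvdDetails
        {
            public int PlayTimeMinutes { get; set; }
            public string[] Directors { get; set; }
            public string[] Artists { get; set; }
        }

        // A BookBehaviour/DvdBehaviour wrapper can then expose only the
        // relevant properties for list views and detail pages.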

    Read the article
