Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.

  • Skip subdirectory in python import

    - by jstaab
    Ok, so I'm trying to change this:

        app/
        - lib.py
        - models.py
        - blah.py

    into this:

        app/
        - __init__.py
        - lib.py
        - models/
          - __init__.py
          - user.py
          - account.py
          - banana.py
        - blah.py

    and still be able to import my models using from app.models import User rather than having to change it to from app.models.user import User all over the place. Basically, I want everything to treat the package as a single module, but be able to navigate the code in separate files for development ease. The reason I can't do something like add for file in __all__: from file import * to __init__.py is that I have circular references between the model files. A fix I don't want is to import those models from within the functions that use them - that's super ugly. Let me give you an example:

        # user.py
        from app.models import Banana
        ...

        # banana.py
        from app.models import User
        ...

    I wrote a quick pre-processing script that grabs all the files, rewrites them to put imports at the top, and merges them into models.py, but that's hardly an improvement, since now my stack traces don't show the line number I actually need to change. Any ideas? I always thought __init__.py was probably magical, but now that I dig into it, I can't find anything that lets me give myself this really simple convenience.
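    A minimal sketch of the re-export half of this, using the module and class names from the question; it only covers the "keep writing from app.models import User" part, since circular name imports between user.py and banana.py still need care:

        # app/models/__init__.py - re-export the public names so existing
        # "from app.models import User" imports keep working
        from app.models.user import User
        from app.models.account import Account
        from app.models.banana import Banana

        __all__ = ['User', 'Account', 'Banana']

    One common remedy for the cycle itself is to import the sibling module rather than the class and look the class up at call time, though whether that actually breaks the cycle depends on import order in the specific codebase.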

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer.

    THE PROBLEM: Access is unable to perform the query - it seems the query is so complex that Access crashes any time I run it.

    ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new:

    1. Export each table to CSV, import the CSVs into SQLite, and run the same query there to do what Access fails to do (merge 64 tables).
    2. Export each table to CSV and write a script that reads each one and merges the CSVs into a single CSV.
    3. Connect to the MS Access DB programmatically (via an API) and write a script to pull data from each table and merge it into a CSV file.

    QUESTION: What do you recommend?
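    For a sense of what alternative 3 could look like, a rough Python sketch using pyodbc and the Access ODBC driver; the file paths are placeholders, and it assumes the 64 tables share the same column layout (if they don't, a per-table column mapping is needed):

        # Python 2-style sketch: dump every user table in the .accdb into one CSV.
        import csv
        import pyodbc

        conn = pyodbc.connect(
            r'Driver={Microsoft Access Driver (*.mdb, *.accdb)};'
            r'DBQ=C:\data\source.accdb;')
        cur = conn.cursor()

        with open('merged.csv', 'wb') as f:
            writer = csv.writer(f)
            for table in [row.table_name for row in cur.tables(tableType='TABLE')]:
                cur.execute('SELECT * FROM [{0}]'.format(table))
                for row in cur.fetchall():
                    writer.writerow(list(row))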

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here.

    I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables:

    - USERS
    - GROUPS
    - GROUP_USERS

    The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores which groups every user is part of. It only has two columns:

    - USER_ID
    - GROUP_ID

    Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table. Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique; if I only give them the UNIQUE constraint, pgAdmin tells me that each table should have at least one primary key.

    But now I am stuck with what seems to be a lot of indexes and relations on a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's all just numbers anyway, but it feels quite stupid to have a full-sized index referring to a smaller table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there any other solution I should be aware of?

    PS. I don't want to use an array column, even though they are supported by PostgreSQL. I want to be as generic as possible so I can switch databases later on, if necessary.

    EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance rather than the opposite, due to the size of the generated index? Thank you!
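    For reference, a minimal sketch of the join table as described (table and column names from the question; the integer types and referenced key names are assumptions):

        CREATE TABLE group_users (
            user_id  integer NOT NULL REFERENCES users (id),
            group_id integer NOT NULL REFERENCES groups (id),
            PRIMARY KEY (user_id, group_id)
        );

    The composite primary key provides both the uniqueness guarantee and an index on (user_id, group_id) in a single object; a second index on group_id alone is only worth adding if lookups also go from group to users.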

  • C# cross thread dialogue co-operation

    - by John Attridge
    I am looking at a primarily single-threaded Windows Forms application in .NET 3.0. Recently my boss had a progress dialog added on a separate thread, so the user would see some activity when the main thread went away to do some heavy-duty work and locked up the GUI.

    This works fine unless the user switches applications or minimizes, because the progress form sits topmost and will not disappear with the main application. That is not so bad when there are lots of little operations: the main form catches up on its queued events when it gets time, so the minimized and active flags can be checked and the dialog thread can hide or show itself accordingly. But if a long-running SQL operation kicks off, no events fire. I have tried intercepting the WndProc command, but this also appears to be queued while a long-running SQL operation is executing. I have also tried enumerating the processes, finding the current app and checking various values (IsIconic and the like) from inside the progress thread, but until the SQL operation finishes none of these get updated. Removing the topmost flag causes the dialog to disappear when another app activates, but if the main app is then brought back it does not appear again.

    So I need a way to find out if the other thread is minimized or no longer active that does not involve querying the actual thread, as that locks until the SQL operation finishes. I know this is not the best way to write this, and it would be better to have all the heavy processing on separate threads, leaving the GUI free, but as this is a huge ancient legacy app the time to re-write it in that fashion will not be provided, so I have to work with what I have got. Any help is appreciated.
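    One avenue that might be worth a try, offered as a sketch rather than a verified fix: capture the main form's Handle on the UI thread before the long operation starts, then poll native window state from the progress thread. GetForegroundWindow and IsIconic read state from the window manager rather than sending a message to the busy thread, so the foreground check in particular should keep working while the SQL call blocks; the minimize check may still lag if the blocked thread never processes the minimize command.

        using System;
        using System.Runtime.InteropServices;

        static class MainWindowState
        {
            [DllImport("user32.dll")]
            static extern bool IsIconic(IntPtr hWnd);

            [DllImport("user32.dll")]
            static extern IntPtr GetForegroundWindow();

            // hwnd = the main form's Handle, captured on the UI thread
            // before the long-running SQL call starts.
            public static bool IsMinimized(IntPtr hwnd) { return IsIconic(hwnd); }
            public static bool IsInForeground(IntPtr hwnd) { return GetForegroundWindow() == hwnd; }
        }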

  • Best Practices For Secure APIs?

    - by Ferrett Steinmetz
    Let's say I have a website that has a lot of information on our products. I'd like some of our customers (including us!) to be able to look up our products via various methods, including:

    1) Pulling data from AJAX calls that return data in cool, JavaScripty ways;
    2) Creating iPhone applications that use that data;
    3) Having other web applications use that data for their own ends.

    Normally, I'd just create an API and be done with it. However, this data is in fact mildly confidential - which is to say that we don't want our competitors to be able to look up all our products every morning and then automatically set their prices to undercut us. And we also want to be able to look at who might be abusing the system, so if someone's making ten million complex calls to our API a day and bogging down our server, we can cut them off.

    My next logical step would be to create a developer's key to restrict access - which would work fine for web apps, but not so much for any AJAX calls. (As I see it, they'd need to provide the key in the JavaScript, which is plaintext and easily seen, and hence there's actually no security at all. Particularly since we'd be using our own developer's keys on our site to make these AJAX calls.)

    So my question: after looking around at OAuth and OpenID for some time, I'm not sure there is a solution that would handle all three of the above. Is there some sort of canonical "best practices" for developer's keys, or can OAuth and OpenID handle AJAX calls easily in some fashion I have yet to grok, or am I missing something entirely?
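    For the server-to-server cases (2 and 3), one widely used pattern is to pair each developer key with a secret and sign every request; it does not solve the in-browser AJAX case, where anything shipped to the client is visible. A minimal sketch using only the Python standard library - the message layout and header convention are illustrative, not any particular service's API:

        import hashlib
        import hmac

        def sign_request(secret, method, path, body, timestamp):
            # The client sends its key id, the timestamp and this signature as
            # headers; the server recomputes the signature with the stored secret
            # and rejects stale timestamps, so captured traffic can't simply be
            # replayed, and per-key request counts give the abuse/rate-limit data.
            message = "\n".join([method, path, str(timestamp), body])
            return hmac.new(secret.encode("utf-8"), message.encode("utf-8"),
                            hashlib.sha256).hexdigest()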

  • Unit testing a 'legacy' WPF Application

    - by sc_ray
    The product I have been working on has been in development for the past six years. It started as a generic data-entry portal into an insanely complex part-WPF, part-legacy application. The system has been developed all these years without a single unit test in its fold. Now the point has been raised that it needs a comprehensive unit-testing framework.

    I was recruited recently to work on this product and have been tasked with getting the testing in order. Since the team that worked on the product for the last six years adopted 'Agile', the project lacks any documentation of the business rules or any design documents. I have been trying to write unit tests for some of the modules, but I am not sure what to mock, how to set up my test fixtures, and eventually what to test for, since a casual glance at the methods does not reveal their intentions. It has also come to my attention that the code was not developed with any particular methodology in mind.

    Given the situation, I was wondering if the good people of Stackoverflow could provide me with some advice on how to salvage this situation. I have heard about the book 'Working Effectively with Legacy Code', which has something to say about this general situation, but I was hoping to get some pointers from individuals who have encountered similar situations within the same technology stack (C#, VB, C++, .NET 3.5, WCF, SQL Server 2005).

  • Image Gurus: Optimize my Python PNG transparency function

    - by ozone
    I need to replace all the white(ish) pixels in a PNG image with alpha transparency. I'm using Python on App Engine and so do not have access to libraries like PIL, ImageMagick, etc. App Engine does have an image library, but it is aimed mainly at image resizing. I found the excellent little pyPNG module and managed to knock up a little function that does what I need (make_transparent.py). Pseudo-code for the main loop would be something like:

        for each pixel:
            if pixel looks "quite white":
                set pixel values to transparent
            otherwise:
                keep existing pixel values

    and (assuming 8-bit values) "quite white" would be:

        where each r, g, b value is greater than 240
        AND each r, g, b value is within 20 of the others

    This is the first time I've worked with raw pixel data in this way, and although it works, it also performs extremely poorly. It seems like there must be a more efficient way of processing the data without iterating over each pixel in this manner (matrices?). I was hoping someone with more experience in dealing with these things might be able to point out some of my more obvious mistakes/improvements in my algorithm. Thanks!
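    A runnable sketch of the loop above using pyPNG's row interface (asRGBA8 and Writer are assumed to behave as in the pyPNG versions of that era). Without numpy the realistic gains come from mutating whole rows as bytearrays and avoiding a Python function call and tuple allocation per pixel, so treat this as a starting point rather than a dramatic optimization:

        import png

        def make_transparent(src, dst, threshold=240, spread=20):
            reader = png.Reader(filename=src)
            width, height, rows, meta = reader.asRGBA8()
            out_rows = []
            for row in rows:
                row = bytearray(row)          # flat RGBA bytes for one scanline
                for i in range(0, len(row), 4):
                    r, g, b = row[i], row[i + 1], row[i + 2]
                    if (r > threshold and g > threshold and b > threshold
                            and max(r, g, b) - min(r, g, b) < spread):
                        row[i + 3] = 0        # fully transparent, RGB left as-is
                out_rows.append(row)
            with open(dst, 'wb') as f:
                png.Writer(width=width, height=height, alpha=True).write(f, out_rows)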

  • Why won't Internet Explorer (or Chrome) display my 'Loading...' gif but Firefox will?

    - by codeLes
    I have a page that fires several xmlHttp requests (synchronous, plain-vanilla JavaScript; I'd love to be using jQuery, thanks for mentioning that). I'm hiding/showing a div with a loading image based on starting/stopping the related JavaScript functions (at times I have a series of three xmlHttp-request-spawning functions nested):

        div = document.getElementById("loadingdiv");
        if (div) {
            if (stillLoading) {
                div.style.visibility = 'visible';
                div.style.display = '';
            } else {
                div.style.visibility = 'hidden';
                div.style.display = 'none';
            }
        }

    In Firefox this seems to work fine: the div displays and shows the gif during the required processing. In IE/Chrome, however, I get no such feedback. I am only able to prove that the div/image will even display by putting alert() calls in place where I call the above code; this stops the process and seems to give the browsers in question the window they need to render the DOM change. I want IE/Chrome to work like it works in Firefox. What gives?
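    What is likely happening is that the synchronous requests block the UI thread before IE/Chrome get a chance to repaint, while the alert() pauses long enough for the paint to happen. A hedged sketch of one workaround - show the div, then start the blocking work on a timeout so a repaint can slip in first (the wrapper function name is made up):

        // Give the browser a chance to paint the loading div before the
        // synchronous xmlHttp calls block the UI thread.
        function runWithLoadingIndicator(work) {
            var div = document.getElementById("loadingdiv");
            if (div) { div.style.display = ""; div.style.visibility = "visible"; }
            window.setTimeout(function () {
                try {
                    work();   // the nested synchronous xmlHttp functions go here
                } finally {
                    if (div) { div.style.display = "none"; div.style.visibility = "hidden"; }
                }
            }, 0);   // 0 ms is often enough; some browsers may need ~50 ms to repaint
        }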

  • Stable/repeatable random sort (MySQL, Rails)

    - by Matt Rogish
    I'd like to paginate through a randomly sorted list of ActiveRecord models (rows from a MySQL database). However, this randomization needs to persist on a per-session basis, so that other people who visit the website also receive a random, paginate-able list of records.

    Let's say there are enough entities (tens of thousands) that storing the randomly sorted ID values in either the session or a cookie is too large, so I must temporarily persist it in some other way (MySQL, file, etc.). Initially I thought I could create a function based on the session ID and the page ID (returning the object IDs for that page), but since the object ID values in MySQL are not sequential (there are gaps), that seemed to fall apart as I was poking at it. The nice thing is that it would require no/minimal storage, but the downsides are that it is likely pretty complex to implement and probably CPU intensive.

    My feeling is I should create an intersection table, something like:

        random_sorts(sort_id, created_at, user_id NULL if guest)
        random_sort_items(sort_id, item_id, position)

    and then simply store the sort_id in the session. Then I can paginate the random_sorts WHERE sort_id = n ORDER BY position LIMIT ... as usual. Of course, I'd have to put some sort of a reaper in there to remove them after some period of inactivity (based on random_sorts.created_at). Unfortunately, I'd have to invalidate the sort as new objects were created (and/or old objects were removed, although deletion is very rare). And as load increases, the size/performance of this table (even properly indexed) drops. It seems like this ought to be a solved problem, but I can't find any Rails plugins that do this... Any ideas? Thanks!!
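    One storage-free alternative that may be worth testing: MySQL's RAND() accepts an integer seed and then produces a repeatable sequence, so seeding it with a per-session number makes the shuffle stable across page requests. The caveats are that every request re-sorts the whole result set, and the order is only as stable as the underlying row scan order. A sketch, with a placeholder table name and seed:

        -- 57421 would be an integer derived from the session id
        SELECT id
        FROM items
        ORDER BY RAND(57421)
        LIMIT 30 OFFSET 60;   -- page 3 at 30 rows per page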

  • MonoTouch Load image in background

    - by user1058951
    I am having a problem trying to load an image and display it using System.Threading.Tasks.Task. My code is as follows:

        Task DownloadTask { get; set; }
        public string Instrument { get; set; }

        public PriceChartViewController(string Instrument)
        {
            this.Instrument = Instrument;
            DownloadTask = Task.Factory.StartNew(() => { });
        }

        private void LoadChart(ChartType chartType)
        {
            NSData data = new NSData();

            DownloadTask = DownloadTask.ContinueWith(prevTask =>
            {
                try
                {
                    UIApplication.SharedApplication.NetworkActivityIndicatorVisible = true;
                    NSUrl nsUrl = new NSUrl(chartType.Uri(Instrument));
                    data = NSData.FromUrl(nsUrl);
                }
                finally
                {
                    UIApplication.SharedApplication.NetworkActivityIndicatorVisible = false;
                }
            });

            DownloadTask = DownloadTask.ContinueWith(t =>
            {
                UIImage image = new UIImage(data);
                chartImageView = new UIImageView(image);
                chartImageView.ContentScaleFactor = 2f;
                View.AddSubview(chartImageView);
                this.Title = chartType.Title;
            },
            CancellationToken.None,
            TaskContinuationOptions.OnlyOnRanToCompletion,
            TaskScheduler.FromCurrentSynchronizationContext());
        }

    The second ContinueWith does not seem to be called. Initially my code looked like the following, without the background processing, and it worked perfectly:

        private void oldLoadChart(ChartType chartType)
        {
            UIApplication.SharedApplication.NetworkActivityIndicatorVisible = true;
            NSUrl nsUrl = new NSUrl(chartType.Uri(Instrument));
            NSData data = NSData.FromUrl(nsUrl);
            UIImage image = new UIImage(data);
            chartImageView = new UIImageView(image);
            chartImageView.ContentScaleFactor = 2f;
            View.AddSubview(chartImageView);
            this.Title = chartType.Title;
            UIApplication.SharedApplication.NetworkActivityIndicatorVisible = false;
        }

    Does anyone know what I am doing wrong?
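    One hypothesis worth checking: with TaskContinuationOptions.OnlyOnRanToCompletion, the UI continuation is silently cancelled whenever the download continuation throws (bad URL, no network, NSData.FromUrl returning unexpectedly), so nothing appears and no exception surfaces. A sketch of a more forgiving final continuation that reports the fault instead - it also assumes LoadChart is called on the UI thread, which FromCurrentSynchronizationContext requires:

        DownloadTask = DownloadTask.ContinueWith(t =>
        {
            if (t.IsFaulted)
            {
                // Surface the download failure rather than dropping it.
                Console.WriteLine("chart download failed: {0}", t.Exception.InnerException);
                return;
            }

            UIImage image = new UIImage(data);
            chartImageView = new UIImageView(image);
            chartImageView.ContentScaleFactor = 2f;
            View.AddSubview(chartImageView);
            this.Title = chartType.Title;
        }, TaskScheduler.FromCurrentSynchronizationContext());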

  • $_FILES is null, $_POST is not null

    - by Cory Dee
    When I go to upload a file, my $_POST variable knows the file name, but the $_FILES variable is null. I've used this code before, so I'm really stumped. Here's what I'm using for the input:

        <label for="importFile">Attach Resume:</label>
        <input type="hidden" name="MAX_FILE_SIZE" value="10000000">
        <input type="file" name="importFile" id="importFile" class="validate['required']">

    And for processing:

        $uploaddir = "E:/Sites/OPL/2008/assets/apps/newjobs/resumes/";
        $uploadfile = $uploaddir . time() . '-' . urlencode(basename($_FILES['importFile']['name']));

        if (!move_uploaded_file($_FILES['importFile']['tmp_name'], $uploadfile)) {
            echo 'Error uploading file. Error number: ' . $_FILES['importFile']['error'];
            var_dump($_FILES['importFile']);
            echo $_POST['importFile'];
            die();
        }

    Which is giving me this result:

        Error uploading file. Error number: NULL
        Maintaining The OPL Website.doc

    Any help would be greatly appreciated.
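    The symptom described here - the file name showing up in $_POST while $_FILES stays empty - is exactly what happens when the surrounding form tag does not declare multipart encoding, so that is the first thing to check. A sketch of the form, with a placeholder action URL:

        <!-- Without enctype="multipart/form-data" the browser posts only the
             file name as a plain field, and PHP never populates $_FILES. -->
        <form action="process.php" method="post" enctype="multipart/form-data">
            <label for="importFile">Attach Resume:</label>
            <input type="hidden" name="MAX_FILE_SIZE" value="10000000">
            <input type="file" name="importFile" id="importFile">
            <input type="submit" value="Upload">
        </form>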

  • Python : How do you find the CPU consumption for a piece of code?

    - by Yugal Jindle
    Background: I have a Django application; it works and responds pretty well under low load, but under high load, like 100 users/sec, it consumes 100% CPU and then slows down due to lack of CPU.

    Problem: Profiling the application gives me the time taken by functions. This time increases under high load. The time consumed may be due to complex calculation or to waiting for CPU. So, how do I find the CPU cycles consumed by a piece of code? Reducing the CPU consumption will improve the response time. I might have written extremely efficient code and simply need to add more CPU power, OR I might have some stupid code hogging the CPU and causing the slowdown. Any help is appreciated!

    Update: I am using JMeter to profile my webapp; it gives me a throughput of 2 requests/sec with 100 users. I get an average time of 36 seconds per request at 100 concurrent requests vs. 1.25 sec for a single request.

    More info on the configuration:

    - Nginx + uWSGI with 4 workers
    - No database used; responses come from a REST API
    - On the first hit the response of the REST API gets cached, so it doesn't make a difference
    - Using ujson for JSON parsing

    Curious to know: Python/Django is used by so many orgs for so many big sites, so there must be some high-end debug / memory-CPU analysis tools. All those I found were casual snippets of code that perform profiling.
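    A minimal sketch for separating CPU time from waiting, using only the standard library (Unix-only because of the resource module). Wrapping a suspect view or helper with it under load tells the two cases apart: if wall-clock time grows while CPU time stays flat, the code is waiting (on the REST API, locks, the GIL) rather than computing.

        import time
        import resource

        def measure(func, *args, **kwargs):
            usage0 = resource.getrusage(resource.RUSAGE_SELF)
            wall0 = time.time()
            result = func(*args, **kwargs)
            wall = time.time() - wall0
            usage1 = resource.getrusage(resource.RUSAGE_SELF)
            # user + system CPU seconds actually charged to this process
            cpu = ((usage1.ru_utime - usage0.ru_utime) +
                   (usage1.ru_stime - usage0.ru_stime))
            print("wall %.3fs, cpu %.3fs" % (wall, cpu))
            return result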

  • Call 32-bit or 64-bit program from bootloader

    - by user1002358
    There seems to be quite a lot of identical information on the Internet about writing the following three bootloaders:

    1. An infinite loop (jmp $)
    2. Printing a single character
    3. Printing "Hello World"

    This is fantastic, and I've gone through these three variations with very little trouble. I'd like to write some 32- or 64-bit code in C, compile it, and call that code from the bootloader... basically a bootloader that, for example, sets the computer up to run some simple numerical simulation. I'll start by listing primes, for example, and then maybe some input/output from the user to compute a Fourier transform. I don't know.

    I haven't found any information on how to do this, but I can already foresee some problems before I even begin. First of all, compiling a C program produces one of several different file formats, depending on the target. For Windows, it's a PE file. For Linux, it's an a.out file. These files are quite different from each other. In my case, the target isn't Windows or Linux, it's just whatever I have written in the bootloader.

    Secondly, where would the actual code reside? The bootloader is exactly 512 bytes, but the program I write in C will certainly compile to something much larger. It will need to sit on my (virtual) hard disk, probably in some sort of file system (which I haven't even defined!), and I will need to load it from that file into memory before I can even think about executing it. But from my understanding, all this is many, many orders of magnitude more complex than a 12-line "Hello World" bootloader.

    So my question is: how do I call a large 32- or 64-bit program (written in C/C++) from my 16-bit bootloader?

  • Any web services with APIs for outsourcing webapp transactional email? [closed]

    - by Tauren
    My webapp needs to send customized messages to members, and I'm wondering if there is an inexpensive and easy-to-use web service that would meet my needs. The types of mail I will be sending include:

    - New account activation email (sent ASAP)
    - Daily status report of the user's account (sent any time)
    - Event reminders (sent at a specific time)

    Specifically, I would like the following features:

    - RESTful API to add an email message into the send queue
    - A way to add a priority to each message (account signup activations should be sent immediately, while a daily status report could be sent any time each day)
    - They manage the sending of mail and the processing of bounces
    - Possibly, they manage opt-out/opt-in features
    - They offer features such as DKIM, VERP, etc.
    - If their service determines an address is undeliverable (via VERP or other means, a user unsubscribes, etc.), they make a RESTful call to my web service to notify me
    - Nice if they had some reporting features, web bugs, link tracking, etc.

    What I am NOT looking for is an email marketing service that caters only to sending out copies of the same mail to masses of recipients. I need to send out unique and custom messages to individuals. I had a chat with MailChimp about using their services for sending these types of messages and they said their service does not support customized emails per recipient.

    Edit: I just discovered a service called JangoSMTP that appears to meet many of my requirements. It provides an API for sending mail, supports DKIM, bounce management, and even feedback loops. Unfortunately, their idea of inexpensive doesn't mesh with mine, as it would cost $180/mo to send a single daily message to my 1000 users.

  • What is better for a student programming in C++ to learn for writing GUI: C# vs QT?

    - by flashnik
    I'm a teacher (instructor) of CS at a university. The course is based on Cormen and Knuth, and students implement the algorithms in C++. But sometimes it is good to show how an algorithm works, or just the result of a task, through a GUI. Also, in my opinion it's very important to be able to write full programs. They will have courses on GUI programming, but three years later - in fact, just before graduation. I think they should be able to write simple GUI applications earlier, so I want to teach them that. What do you think is more useful for them to learn: programming a GUI with Qt, or writing the GUI in C# and calling an unmanaged C++ library?

    Update. For developing C++ applications, students use MS Visual Studio, so C# is already installed. But Qt, AFAIK, can also be integrated into VS. I have the following pros for C# (some were suggested in answers):

    1. The need to make an additional layer. It's more work, but it forces you to explicitly specify the contract between the GUI and the data processing. The border between GUI and algorithms becomes very clear.
    2. It's more popular among employers - at least in Russia, where we live. It's rather common to write performance-critical algorithms in C++ and P/Invoke them from a nice-looking C# application/ASP.NET website. Maybe it is not so widespread in the rest of the world, but in Russia Windows is very popular, especially in companies and corporations, so most B2B applications are Windows applications.
    3. Rapid development. It's much quicker to code in .NET than in C++ for many reasons.

    And the con is that it's a new language with its own specifics for the students, plus the mess of invoking calls into the library.

  • HTML columns or rows for form layout?

    - by Valera
    I'm building a bunch of forms that have labels and corresponding fields (an input element or a more complex element). Labels go on the left, fields go on the right. Labels in a given form should all be a specific width so that the fields all line up vertically. There are two ways (maybe more?) of achieving this:

    1. Rows: Float each label and each field left. Put each label and field in a field-row div/container. Set the label width to some specific number. With this approach labels on different forms will have different widths, because they'll depend on the width of the text in the longest label.
    2. Columns: Put all labels in one div/container that's floated left, and put all fields in another floated-left container with padding-left set. This way the labels, and even the label container, don't need to have their widths set, because the column layout and the padding-left uniformly take care of vertically lining up all the fields.

    So approach #2 seems easier to implement (because the widths don't need to be set all the time), but I think it's also less object oriented, because a label and the field that goes with it are not grouped together, as they are in approach #1. Also, when building forms dynamically, approach #2 doesn't work as well with functions like addRow(label, field), since such a function would have to know about the label and field containers, instead of just creating/adding one field-row element.

    Which approach do you think is better? Is there another, better approach than these two?
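    For what it's worth, a minimal sketch of approach #1 with the class names invented for illustration; keeping each pair in one row container preserves the label/field grouping, and putting the label width in a single shared stylesheet rule keeps it consistent across forms:

        /* one row container per label/field pair */
        .field-row        { overflow: hidden; }          /* contains the floats */
        .field-row label  { float: left; width: 12em; }  /* shared width across all forms */
        .field-row .field { float: left; }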

  • OCR combined with font recognition?

    - by Adam
    I have a bold idea: a user could take an image like the following and, after a few seconds of processing, be able to edit a document that looks roughly the same. The software would use WhatTheFont (or something similar) to recognize the fonts used, and OCR and other software to handle the font size, color, line spacing, and of course the text content itself. In the case of the example image, there would be three separate "textboxes" produced, each starting at the upper-left corner of its text and extending as far to the bottom right as it could before running into another text box. So the user would then see something like this (the rectangles are just used to show the boundaries of each textbox). From here, the user would be able to edit the text in each of these boxes to create a new document. Of course there are tons of obvious uses for such an application, especially on a mobile phone with a built-in camera.

    So my questions are the following:

    1. I doubt the answer is yes, but does anything do this already?
    2. If I'm going to try to build this, what should I write it in? Can I use Python?
    3. What would be the best OCR libraries to start with?
    4. Is there a service other than WhatTheFont for font recognition that has better API support?
    5. Anybody want to help me build it? :) etc., etc.

    Update: One thing I wanted to mention (but forgot) is that I would also like the background to be preserved. In other words, if the example above had an image behind the text, I'd like the document to use that image with the text removed. I know this complicates things a lot, because it would require some image editing techniques too (something akin to Photoshop CS5's "content-aware fill"). But if we can solve diminished reality on iPhones, I think we can figure this out!

  • Which is faster: Appropriate data input or appropriate data structure?

    - by Anon
    I have a dataset whose columns look like this:

        Consumer ID | Product ID | Time Period | Product Score
                  1 |          1 |           1 |             2
                  2 |          1 |           2 |             3

    and so on. As part of a program (written in C) I need to process the product scores given by all consumers for a particular product and time period combination, for all possible combinations. Suppose that there are 3 products and 2 time periods. Then I need to process the product scores for all possible combinations, as shown below:

        Product ID | Time Period
                 1 |           1
                 1 |           2
                 2 |           1
                 2 |           2
                 3 |           1
                 3 |           2

    I will need to process the data along the above lines lots of times (> 10k), and the dataset is fairly large (e.g., 48k consumers, 100 products, 24 time periods, etc.), so speed is an issue. I came up with two ways to process the data and am wondering which is the faster approach, or perhaps it does not matter much (speed matters, but not at the cost of undue maintenance/readability):

    1. Sort the data on product ID and time period, and then loop through the data to extract the rows for all possible combinations.
    2. Store the consumer IDs of all consumers who provided product scores for a particular combination of product ID and time period, and process the data accordingly.

    Any thoughts? Any other way to speed up the processing? Thanks
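    A minimal sketch of approach 1 in C, with the struct and field names invented for illustration: sort once on (product, period), after which every combination is a contiguous run that a single linear pass can hand to the scoring code.

        #include <stdlib.h>

        struct score { int consumer, product, period, value; };

        /* Order rows by (product, period) so each combination is contiguous. */
        static int by_product_period(const void *a, const void *b)
        {
            const struct score *x = a, *y = b;
            if (x->product != y->product)
                return x->product - y->product;
            return x->period - y->period;
        }

        void process_all(struct score *rows, size_t n)
        {
            qsort(rows, n, sizeof rows[0], by_product_period);
            for (size_t i = 0; i < n; ) {
                size_t j = i;
                while (j < n && rows[j].product == rows[i].product
                             && rows[j].period  == rows[i].period)
                    j++;
                /* rows[i..j-1] hold every consumer's score for this
                   (product, period) combination - process them here */
                i = j;
            }
        }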

  • Thread is being killed by the OS

    - by Or.Ron
    I'm currently programming an app that extracts frames from a movie clip. I designed it so that the extraction is done on a separate thread to prevent the application from freezing. The extraction process itself takes a lot of resources, but works fine in the simulator. However, there are problems when building it for the iPad: when I perform another action (I'm telling my AV player to play while I extract frames), the thread unexpectedly stops working, and I believe it's being killed. I assume it's because I'm using a lot of resources, but I'm not entirely sure. Here are my questions:

    1. How can I tell if/why my thread is stopping?
    2. If it's really from over-processing, what should I do? I really need this action to be implemented.

    Here's some code I'm using to create the thread:

        [NSThread detachNewThreadSelector:@selector(startReading) toTarget:self withObject:nil];

    I'll post any information you need. Thanks so much!

    Update: I'm using GCD now and it populates the threads for me. However, the OS still kills the threads. I know exactly when it is happening: the moment I call [AVPlayer play], it kills the thread. This issue only happens on the actual iPad and not in the simulator.

  • Are workflows good for web service business logic?

    - by JL
    I have a series of complex web services that are used in my SOA application. I am generally happy with the overall design of the application, but as the complexity grows I am wondering whether Windows Workflow might be the way to go. My motivation is that you get a graphical representation of the application's functionality, so it would be easier to maintain the code by its business function rather than what I have now (a standard 3-tier class library structure). My concerns are:

    1. I would be introducing an abstraction into my code, and I don't want to spend time dealing with possible WF quirks or bugs. I've never worked with WF - is it a solid technology?
    2. I don't want to hit any WF limitations that prevent me from developing my solution.
    3. Is a workflow even the right solution for the task?

    Simply put, I am considering writing the next web service in this app to call a WF, and having that workflow manage the tasks the web service needs to carry out. I think it will be much neater and easier to maintain than a regular C# class library (maintainable by namespaces and classes). Do you think this is the right thing to do? I'm hoping for positive feedback on WF (.NET 4), but brutal honesty at the end of the day would help more. Thanks

  • Mgmt wants to re-title my position: Any help...? [closed]

    - by JohnFlyTN
    Management here wants to re-title my position, since I'm doing quite a bit of different work than was originally planned. They want my input. After a quick glance over my skill set and job duties, how would we need to describe this position? I'll only list things I'm at least proficient in; I will not list things I have a passing knowledge of.

    About me: ~10 years of software development.

    - Languages: C, C++, Perl, PHP, C#, TCL, Unix shell scripting, SQL (T-SQL, PL/SQL)
    - Systems: MS-DOS, Windows 3.1 to 7 for clients, NT 4 to 2008 for servers, OS/2, IBM MVS & z/OS, Linux (multiple distros), AIX

    Current position: I write all sorts of in-house software, ranging from single-user apps to large systems spanning multiple OSes. One of the larger projects I've designed and coded is about 100k lines of C#, plus a database where I have been the sole designer and maintainer. I have near-total freedom to design as I see fit; the restraints are usually budgetary.

    Skills required to replace me in my current role: Windows and Unix admin, database design, .NET up to 3.5 (C#, ASP.NET), C++, Perl, and good skills in designing large and efficient data-processing systems.

    Given this small amount of information, what would you see this position being titled? (Is more information required to render a decision?)

  • CSS - Positioning images next to text

    - by jpjoki
    Hi, I'm doing a site in which images need to be presented next to textual content - a sort of pseudo two-column layout, as the images and text come from a single HTML source. I've found quite a simple way to do this by putting the images in their own paragraphs and floating them. Would there be an even simpler way (in terms of HTML) to do this without these extra paragraphs, attributing extra CSS only to the images? If the floated image is in the same paragraph as the text, then paragraphs with and without images differ in width.

    EDIT: Basically, I'm looking for as simple an HTML markup as possible to position images like this. The CSS can be complex ;)

    CSS:

        p       { width: 500px; }
        p.image { float: right; width: 900px; }

    Current HTML:

        <p class="image"><img src="image.jpg" /></p>
        <p>Some text here.</p>

    Is the above possible with this HTML?

        <p><img src="image.jpg" /></p>

  • Why does reusing arrays increase performance so significantly in c#?

    - by Willem
    In my code I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task I allocate memory for an array:

        double[] tempDoubleArray = new double[M];

    M is a large number depending on the precise task, typically around 2,000,000. Now, I do some complex calculations to fill the array, and in the end I use the array to determine the result of the task. After that, tempDoubleArray goes out of scope.

    Profiling reveals that the calls to construct the arrays are time consuming. So I decided to try to reuse the array by making it static. It requires some additional juggling to figure out the minimum size of the array, requiring an extra pass through all tasks, but it works. Now the program is much faster (from 80 sec to 22 sec for executing all tasks):

        double[] tempDoubleArray = staticDoubleArray;

    However, I'm a bit in the dark about why precisely this works so well. I'd say that in the original code, when tempDoubleArray goes out of scope, it can be collected, so allocating a new array should not be that hard, right? I ask this because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in what cases allocation gives performance issues.
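    A plausible explanation, offered as a sketch rather than a definitive diagnosis: an array of ~2,000,000 doubles is about 16 MB, far past the ~85 KB threshold at which .NET places objects on the large object heap, so each new double[M] means committing and zeroing 16 MB and later reclaiming it in a full (gen-2) collection; doing that 500 times adds up. A grow-only buffer avoids both costs and also removes the need for the extra up-front pass to find the maximum size:

        static double[] staticDoubleArray;

        static double[] GetBuffer(int minLength)
        {
            // Grow only when a task needs more than any previous task did.
            // Contents are stale from the last task, so each task must
            // overwrite every element it later reads.
            if (staticDoubleArray == null || staticDoubleArray.Length < minLength)
                staticDoubleArray = new double[minLength];
            return staticDoubleArray;
        }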

  • Refactoring. Your way to reduce code complexity of big class with big methods

    - by Andrew Florko
    I have a legacy class that is rather complex to maintain:

        class OldClass
        {
            method1(arg1, arg2)
            {
                ... 200 lines of code ...
            }

            method2(arg1)
            {
                ... 200 lines of code ...
            }

            ...

            method20(arg1, arg2, arg3)
            {
                ... 200 lines of code ...
            }
        }

    The methods are huge, unstructured and repetitive (the developer loved the copy/paste approach). I want to split each method into 3-5 small functions, with one public method and several helpers. What would you suggest? Several ideas come to mind:

    1. Add several private helper methods for each method and group them in a #region (straightforward refactoring).
    2. Use the Command pattern (one command class per OldClass method, in a separate file).
    3. Create a helper static class per method, with one public method and several private helper methods; the OldClass methods delegate their implementation to the appropriate static class (very similar to commands).
    4. ?

    Thank you in advance!
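    A compilable sketch of option 2, with all names and argument types as placeholders: each OldClass method becomes a small class whose public Execute is the old entry point and whose private methods are the former copy/paste blocks, which also makes duplicated blocks easy to pull out into shared helpers later.

        interface ICommand
        {
            void Execute();
        }

        class Method1Command : ICommand
        {
            private readonly string arg1;   // placeholder argument types
            private readonly int arg2;

            public Method1Command(string arg1, int arg2)
            {
                this.arg1 = arg1;
                this.arg2 = arg2;
            }

            public void Execute()
            {
                var data = Load();     // former inline block, now a named step
                Validate(data);
                Save(data);
            }

            private string Load() { return arg1 + arg2; }
            private void Validate(string data) { }
            private void Save(string data) { }
        }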

  • Laravel check if id exists?

    - by devt204
    I have two columns in the contents table:

        1. id
        2. content

    Now this is what I'm trying to do:

        Route::post('save', function()
        {
            $editor_content = Input::get('editor_content');
            $rules = array('editor_content' => 'required');
            $validator = Validator::make(Input::all(), $rules);

            if ($validator->passes())
            {
                // 1. check if an id was submitted
                // 2. if the id exists, update the content table
                // 3. else insert new content

                // create new instance
                $content = new Content;

                // insert the content into the content column
                $content->content = $editor_content;

                // save the content
                $content->save();

                // check if content has an id
                $id = $content->id;

                return Response::json(array('success' => 'sucessfully saved', 'id' => $id));
            }

            if ($validator->fails())
            {
                return $validator->messages();
            }
        });

    I want to check whether an id has already been submitted (I'm processing the request via AJAX). If the id exists I want to update the content column, and if it doesn't I want to create a new row. How do I do it?
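    A hedged sketch of the branch inside the passes() block, assuming the AJAX request posts the existing row's id under the field name "id" (adjust to whatever the client actually sends). Content::find returns null when no row matches, which makes the insert/update decision straightforward:

        $id = Input::get('id');
        $content = $id ? Content::find($id) : null;

        if (!$content)
        {
            $content = new Content;   // no id submitted (or id not found): insert
        }

        $content->content = $editor_content;   // set or update the content column
        $content->save();

        return Response::json(array('success' => 'sucessfully saved', 'id' => $content->id));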
