Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.


  • Android Development Tips, Tricks & Gotchas

    - by Mat Nadrofsky
    I'm starting down the road of Android Development. At this point I'm looking for some insight from other developers who have been doing 'droid development and have some experience to share with someone who is just starting out. This can be anything from API to AVM to IDE. Any unexpected things come up while building your apps? Any tips for project layout or organization that help facilitate the deployment process to the Android AppStore? Any patterns which specifically helped in a particular situation? Even links to great blogs or sample apps and resources beyond those which you can grab from Google Code would be appreciated.

    Read the article

  • Updating DataGridView from another Form using BackGroundWorker

    - by FezKazi
    Hi! I have one Form (LoginForm) with a BackgroundWorker monitoring a database for new entries. Then I have another Form (AdminForm) which I need to signal to update its grids whenever new data is available. I could poll the database in the AdminForm too, but considering that LoginForm is already doing some polling, which can be costly, I just want to signal AdminForm to update its DataGridViews with the new data. You may ask, why is LoginForm doing the polling when you're showing the stuff in the AdminForm? Well, LoginForm is actually processing the data and sending it over the serial port. I want it to be able to process the data without having an administrator logged in all the time.
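
    One way to do the signalling, sketched below under assumed names (LoginForm, AdminForm, a NewDataAvailable event, a PollDatabase placeholder for the existing polling code): have the worker call ReportProgress when it sees new rows, since ProgressChanged fires on the UI thread, and re-raise that as an ordinary event that AdminForm subscribes to. This is only a minimal sketch, not the one true pattern:

        using System;
        using System.ComponentModel;
        using System.Windows.Forms;

        public class LoginForm : Form
        {
            private readonly BackgroundWorker worker = new BackgroundWorker();

            // AdminForm subscribes to this; it is raised on the UI thread.
            public event EventHandler NewDataAvailable;

            public LoginForm()
            {
                worker.WorkerReportsProgress = true;
                worker.WorkerSupportsCancellation = true;
                worker.DoWork += (s, e) =>
                {
                    while (!worker.CancellationPending)
                    {
                        if (PollDatabase())           // placeholder for the existing polling logic
                            worker.ReportProgress(0); // marshals back to the UI thread
                        System.Threading.Thread.Sleep(5000);
                    }
                };
                // ProgressChanged runs on the UI thread, so raising the event here
                // lets AdminForm touch its DataGridViews without Invoke calls.
                worker.ProgressChanged += (s, e) =>
                {
                    EventHandler handler = NewDataAvailable;
                    if (handler != null) handler(this, EventArgs.Empty);
                };
                worker.RunWorkerAsync();
            }

            private bool PollDatabase()
            {
                // ... existing database polling; return true when new entries exist
                return false;
            }
        }

    AdminForm would then hook it up with something like loginForm.NewDataAvailable += (s, e) => RefreshGrids(); and never needs to poll on its own.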

    Read the article

  • iOS File Saving Efficiency

    - by Guvvy Aba
    I am working on my iOS app and my goal is to save a file that I am receiving from the internet bit by bit. My current setup is that I have an NSMutableData object and append a chunk of data to it as I receive the file. After the last "packet" is received, I write the NSData to a file and the process is complete. I'm worried that this isn't the ideal way to do it because of the limited RAM on a mobile device, and receiving large files would be problematic. My next thought was to use an NSFileHandle so that as the file arrives, it is written to disk rather than held in memory. In terms of speed and efficiency, which method do you think will work decently on an iOS device? I am currently using the first, NSMutableData approach. Is it worth changing my app to use NSFileHandle? Thanks in advance, Guvvy

    Read the article

  • Does Google's Geocoding API return results that are more accurate than Google Maps or the same?

    - by jacob501
    I am thinking about using Python or C++ in conjunction with Google's Geocoding API. Since geocoding is the process of turning street addresses into coordinates, I was wondering how Google does this. I am looking for something that will give me coordinates within around 50 meters of the entrance of the location at a specified address. There are a few problems with this when you use Google Maps, however. If you aren't doing it manually, sometimes Google Maps will place a marker for an address on the road rather than over the place itself (especially for addresses in malls, places far off the road, etc.). Does the Geocoding API give you more accurate coordinates, or does it simply copy the coordinates a Google Maps marker would give you? I hope this makes sense. Thanks.
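
    For what it's worth, the Geocoding API does report how precise each result is: the geometry.location_type field is ROOFTOP for an exact match, and RANGE_INTERPOLATED, GEOMETRIC_CENTER or APPROXIMATE otherwise, which at least lets you detect the "marker on the road" case. A minimal sketch in C# (the address is just an example, and a real client would parse the JSON properly rather than string-match it):

        using System;
        using System.Net;

        class GeocodePrecisionCheck
        {
            static void Main()
            {
                // Example address; sensor=false is required by the API.
                string address = Uri.EscapeDataString("1600 Amphitheatre Parkway, Mountain View, CA");
                string url = "http://maps.googleapis.com/maps/api/geocode/json?address=" + address + "&sensor=false";

                using (var client = new WebClient())
                {
                    string json = client.DownloadString(url);

                    // Crude but enough for a precision check: inspect location_type.
                    if (json.Contains("ROOFTOP"))
                        Console.WriteLine("Exact, rooftop-level coordinates.");
                    else
                        Console.WriteLine("Interpolated or approximate coordinates only.");
                }
            }
        }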

    Read the article

  • Indy FTP, large files and NAT routers

    - by Lobuno
    Hello! I have been using Indy to transfer files via FTP for years now but have not been able to find a satisfactory solution for the following problem. When a user is uploading a large file from behind a router, sometimes the following happens: the file is uploaded OK, but in the meantime the command channel gets disconnected because of a timeout. Normally this doesn't happen with a direct connection to the server, because the server "knows" that a transfer is taking place on the data channel. Some routers are not aware of this, though, and the command channel is closed. Many programs send a NOOP command periodically to keep the command channel alive, even though this is not part of the standard FTP specification. My question: how do I do that? Do I send the NOOP command in the OnWork event? Does this cause any collateral damage in some way, like, do I need to process some response? How do I best solve this problem?

    Read the article

  • Mobile phone - configuration via SMS

    - by vpdn
    In Germany, mobile carriers often provide a simple way to configure your mobile phone for MMS and GPRS: after keying in your phone number and device model on the carrier's website, you get a "configuration SMS" sent to you. I'm trying to understand how that works from a technical standpoint. I have scanned through 3GPP TS 03.40 (http://www.3gpp.org/ftp/Specs/html-info/0340.htm), but haven't been able to find much. Also, the fact that one has to provide the phone model suggests that this is a carrier-specific mechanism rather than something standardized? Does anyone have any pointers for me? I'd also be interested in how the "internet-enabling" process works in other countries. Anyone care to share?

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database

    The last place I like to visit is a hospital. With the monsoon season starting and intermittent rains, it has become a sort of routine to catch a cycle of fever every other year (seriously, I hate it). When I visit my doctor, it is always interesting how he quizzes me. The routine questions: "How many days have you had this?", "Is there any pattern?", "Did you get drenched in the rain?", "Do you have any other symptoms?" and so on. The idea here is that the doctor wants to find an anomaly or a pattern that will point him to a viral or bacterial cause. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is usually a solution. SQL Server has its own way to find out whether the server's data and files are in a consistent state, using the DBCC commands.

    Back to SQL Server

    In real life, the database consistency check is one of those critical operations a DBA generally doesn't give much priority to. Many readers of my blog have asked: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and tell whether everything is right or not? My common answer to all of them is to look at the bottom of the CHECKDB (or CHECKTABLE) output for the line below.

    CHECKDB found 0 allocation errors and 0 consistency errors in database 'DatabaseName'.

    The above is a "good sign" because we are seeing zero allocation errors and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below:

    CHECKDB found 0 allocation errors and 2 consistency errors in database 'DatabaseName'.
    repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName).

    If we see non-zero errors then most of the time (not always) we get repair options, depending on the level of corruption. There is a risk involved with the option above (repair_allow_data_loss): we would lose data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem.

    Among the standard reports there is one which shows the history of CHECKDB executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose "Database Consistency History". The information in this report is picked from the default trace. If the default trace is disabled, or there has been no CHECKDB run, or the information is no longer in the default trace (because it has rolled over), the report says it very clearly:

    Currently, no execution history of CHECKDB is available or default trace is not enabled.

    To demonstrate, I caused corruption in one of my databases and did the steps below: 1) ran CHECKDB so that the errors were reported, 2) fixed the corruption by losing the data via the repair option, and 3) ran CHECKDB again to check that the corruption was cleared. After that I launched the report again and could see all three executions.

    If you are lazy like me and don't want to run the report manually for each database, the query below provides the same information for all databases. It is the query the report runs behind the scenes; all I have done is remove the filter on the database name (commented out at the end).

        DECLARE @curr_tracefilename VARCHAR(500);
        DECLARE @base_tracefilename VARCHAR(500);
        DECLARE @indx INT;

        SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
        SET @curr_tracefilename = REVERSE(@curr_tracefilename);
        SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

        SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
               LoginName,
               StartTime,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%found%', TEXTData) + 6,
                   PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
               CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%repaired%', TEXTData) + 9,
                   PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
               SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%time:%', TEXTData) + 6,
                   PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':'
                 + SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%hours%', TEXTData) + 6,
                   PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':'
                 + SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData),
                   PATINDEX('%minutes%', TEXTData) + 8,
                   PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
        FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
        WHERE EventClass = 22
          AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
        -- AND DatabaseName = @DatabaseName;

    Don't worry about the logic above. All it does is read the default trace file and parse entries like the one below, extracting the command, the login, the error counts and the elapsed time:

    DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001.

    Hopefully from now on you will run CHECKDB and understand its importance. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it in your production environment.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Asp.Net Program Architecture

    - by Pino
    I've just taken on a new ASP.NET MVC application and after opening it up I find the following: [Project].Web, [Project].Models, [Project].BLL, [Project].DAL. Now, something that has become clear is that the data has to do a hell of a lot before it makes it to the View (Database -> DAL -> Repo -> BLL -> ConvertToModel -> Controller -> View). The DAL is SubSonic; the repositories in the DAL return the SubSonic entities to the BLL, which processes them, does crazy things, and converts them into a Model (from the .Models project), sometimes with methods that look like this:

        public DataModel GetDataModel(EntityObject Src)
        {
            var ReturnData = new DataModel();
            ReturnData.ID = Src.ID;
            ReturnData.Name = Src.Name;
            // etc. etc.
            return ReturnData;
        }

    Now, the question is: "Is this complete overkill?" OK, the project is of a decent size and can only get bigger, but is it worth carrying on with all this? I don't want to use AutoMapper as it just seems to make the complication worse. Can anyone shed any light on this?

    Read the article

  • Mapping table and a simple view with Fluent NHibernate

    - by adrin
    I have mapped a simple entity, let's say an invoice, using Fluent NHibernate, and everything works fine... After a while it turns out that I very frequently need to process 'sent invoices' (by sent invoices we mean all entities that fulfil the invoice.Sent == true condition). Is there a way to easily abstract 'sent invoices' in terms of my data access layer? I don't like the idea of having the aforementioned condition repeated in half of my repository methods. I thought that using a simple filtering view would be optimal, but how could it be done? Maybe I am doing it terribly wrong and someone can help me realize it :)
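
    One low-ceremony option is a second, read-only mapping over the same table with a class-level Where filter, so 'sent invoices' become their own entity and the condition lives in exactly one place. A minimal sketch with assumed names (SentInvoice, an Invoices table with a Sent bit column):

        using FluentNHibernate.Mapping;

        public class SentInvoice
        {
            public virtual int Id { get; set; }
            public virtual string Number { get; set; }
        }

        public class SentInvoiceMap : ClassMap<SentInvoice>
        {
            public SentInvoiceMap()
            {
                Table("Invoices");   // same table as the full Invoice entity
                Where("Sent = 1");   // NHibernate appends this to every query
                ReadOnly();          // avoid accidental writes through this projection
                Id(x => x.Id);
                Map(x => x.Number);
            }
        }

    Queries such as session.CreateCriteria<SentInvoice>() then return only sent invoices. Mapping an actual database view works the same way, with Table("ViewName") pointing at the view instead.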

    Read the article

  • getting a blank data report vb6

    - by arvind
    Hi, I am new to VB6 and am working on an invoice generation application. I am using a data report to show the generated invoice. The step-by-step process is: first, the data is entered into the Invoice and ItemsInvoice tables; then the max ID is fetched (via Adodc) from the database to show the last generated invoice; then the max ID is passed as a parameter to the data report, which shows the invoice for that invoice ID. It works fine the first time I generate an invoice, but for the second invoice, without closing the application, I get a blank data report. The data report uses a DataEnvironment. My guess is that the report is blank because there was no record for that ID, but the record is actually being inserted in the database. Please help me.

    Read the article

  • How to escape HAML for Javascript in Sinatra

    - by viatropos
    I would like to return a list/combobox from an AJAX request (a "Which one of these do you like?" type thing). I would like to write that little snippet in HAML, which converts it to HTML, but when I do, the page goes blank. I'm assuming this is because the HTML isn't escaped. Is there a way to escape HAML so I can do $("#mydiv").html(response);? Here's the method:

        post "/something" do
          # process...
          haml :"partials/_select", :layout => false, :locals => {:collection => choices}
        end

    ... the HAML template:

        %select
          - collection.each do |item|
            %option{:value => item.to_s}= item.to_s

    ... and the JavaScript:

        success: function(responseText, statusText, xhr, $form) {
          $(".dialog_content").append(responseText);
        }

    I have tried the sinatra_more plugin and the escape_javascript method, but there are problems with the HAML buffer in Sinatra. Any ideas?

    Read the article

  • Signup form using Braintree Transparent Redirect

    - by Robin Fisher
    Hi, I'm developing an application in Rails and want the user to be able to signup and provide their card details on one form. I'm using the Braintree API and their transparent redirect, which means that the form data is posted directly to Braintree. How can I store and later retrieve the non-payment related information provided by the user from that form e.g. account name, username? These values are not returned in the response provided by Braintree. If you look at the Basecamp signup process, this is the result I want to achieve. Thanks Robin

    Read the article

  • Avoid XmlDocument validating namespaces in C#

    - by Abbey Kingston
    Hello, I'm trying to find a way of indenting an HTML file. I've been using XmlDocument and an XmlTextWriter; however, I am unable to format HTML documents correctly because it checks the doctype and tries to download it. Is there a "dumb" indenting mechanism that doesn't validate or check the document and does a best-effort indentation? The files are 4-10 MB in size and they are autogenerated; we have to handle them internally. It's fine if the user has to wait, I just want to avoid forking a new process, etc. Essentially, right now I use a MemoryStream, an XmlTextWriter and an XmlDocument; once indented, I read it back from the MemoryStream and return it as a string. Failures happen for XHTML documents and some HTML 4 documents because it's trying to grab the DTDs. I tried setting XmlResolver to null, but to no avail :(
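
    One way to avoid the DTD download entirely is to skip XmlDocument and stream reader-to-writer, telling the reader to ignore the DOCTYPE. A minimal sketch (assumes .NET 4's DtdProcessing; on earlier frameworks, ProhibitDtd = false plus a null XmlResolver is the rough equivalent):

        using System.IO;
        using System.Text;
        using System.Xml;

        static class Indenter
        {
            public static string Indent(string markup)
            {
                var readerSettings = new XmlReaderSettings
                {
                    DtdProcessing = DtdProcessing.Ignore, // discard the DOCTYPE, never fetch it
                    XmlResolver = null                    // and never touch the network
                };
                var writerSettings = new XmlWriterSettings { Indent = true };

                var sb = new StringBuilder();
                using (var reader = XmlReader.Create(new StringReader(markup), readerSettings))
                using (var writer = XmlWriter.Create(sb, writerSettings))
                {
                    writer.WriteNode(reader, false); // stream through: no DOM, no validation
                }
                return sb.ToString();
            }
        }

    Note this only helps for well-formed XHTML, and with the DTD ignored, any entities it defines (such as &nbsp;) will still make the reader throw; plain tag-soup HTML 4 needs an HTML-aware parser instead.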

    Read the article

  • Test (with RSpec) a controller outside of a Rails environment

    - by ramon.tayag
    I'm creating a gem that will generate a controller for the Rails app that will use it. It's been a trial and error process for me when trying to test a controller. When testing models, it's been pretty easy, but when testing controllers, ActionController::TestUnit is not included (as described here). I've tried requiring it, and all similar sounding stuff in Rails but it hasn't worked. What would I need to require in the spec_helper to get the test to work? Thanks!

    Read the article

  • Big Data – Various Learning Resources – How to Start with Big Data? – Day 20 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned how to become a Data Scientist for Big Data. In this article we will go over various learning resources related to Big Data. In this series we have covered many of the most essential details about Big Data. At the beginning of this series I encouraged readers to send me questions, and one of the most popular is: "I want to learn more about Big Data. Where can I learn it?" This is indeed a great question, as there are plenty of resources out there and it is genuinely difficult to pick just one. Hence I decided to list here a few of the most important resources related to Big Data.

    Learn from Pluralsight

    Pluralsight is a global leader in high-quality online training for hardcore developers. It has fantastic Big Data courses, and I started to learn about Big Data with the help of Pluralsight. Here are a few of the courses which are directly related to Big Data. I encourage all of you to start with these video courses as they are fantastic fundamentals for learning Big Data.

    - Big Data: The Big Picture
    - Big Data Analytics with Tableau
    - NoSQL: The Big Picture
    - Understanding NoSQL
    - Data Analysis Fundamentals with Tableau

    Learn from Apache

    The resources at Apache are the single most authentic learning resources. If you want to learn the fundamentals and go deep into every aspect of Big Data, I believe you must understand the various projects in Apache's library. I am very impressed with the documentation and I personally reference it every single day when I work with Big Data. I strongly encourage all of you to bookmark the following links for authentic Big Data learning.

    - Hadoop: The Apache Hadoop® project develops open-source software for reliable, scalable, distributed computing.
    - Ambari: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, including support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig and Sqoop. Ambari also provides a dashboard for viewing cluster health, such as heat maps, and the ability to view MapReduce, Pig and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner.
    - Avro: A data serialization system.
    - Cassandra: A scalable multi-master database with no single points of failure.
    - Chukwa: A data collection system for managing large distributed systems.
    - HBase: A scalable, distributed database that supports structured data storage for large tables.
    - Hive: A data warehouse infrastructure that provides data summarization and ad hoc querying.
    - Mahout: A scalable machine learning and data mining library.
    - Pig: A high-level data-flow language and execution framework for parallel computation.
    - ZooKeeper: A high-performance coordination service for distributed applications.

    Learn from Vendors

    One of the biggest issues with learning Big Data is setting up the environment. Every Big Data vendor has different environment requirements, and lots of pieces are needed to set up a Big Data framework. Many users do not start with Big Data because they are worried about the resources required to set up the framework, as well as the time commitment. Here Hortonworks has created a fantastic learning environment: a Sandbox with everything one needs to learn Big Data, along with excellent tutorials. The Sandbox comes with a dozen hands-on tutorials that guide you through the basics of Hadoop, and it contains the Hortonworks Data Platform. I think Hortonworks did a fantastic job building this Sandbox and its tutorials. Though there are plenty of different Big Data vendors, I have decided to list only Hortonworks due to their unique setup. Please leave a comment if there is any other such platform to learn Big Data, and I will include it here as well.

    Learn from Books

    There are indeed a few good books out there which one can refer to in order to learn Big Data. Here are a few good books which I have read; I will update the list as I learn more.

    - Ethics of Big Data: Balancing Risk and Innovation
    - Big Data for Dummies
    - Head First Data Analysis: A Learner’s Guide to Big Numbers, Statistics, and Good Decisions

    If you search on Amazon there are millions of books, but I think the above three are a great set and will give you great ideas about Big Data. Once you go through them, you will have a clear idea about the next step you should follow in this series, and you will be capable of making the right decision for yourself.

    Tomorrow

    In tomorrow’s blog post we will wrap up this series on Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    The Joes 2 Pros SQL Server learning series is indeed fun. It is written for beginners and for those who want to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series the author, Rick Morelan, is not shy about explaining a concept as simple as how to open SQL Server Management Studio. Honestly, the books start from basics that elementary, but as they progress Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be in the exam, the series focuses on learning the important concepts thoroughly; the books in no way take shortcuts in explaining any concept and at times go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a few quick seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015

    The XML data type was first introduced with SQL Server 2005. This data type continues in SQL Server 2008, where expanded XML features are available, most notably the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008:

    - query() – Used to extract XML fragments from an XML data type.
    - value() – Used to extract a single value from an XML document.
    - exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no.
    - modify() – Updates XML data in an XML data type.
    - nodes() – Shreds XML data into multiple rows (not covered in this blog post).

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014

    Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases, and it is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16:

    - Statement Termination – The statement within the procedure fails but the code keeps on running to the next statement. Transactions are not affected.
    - Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@ERROR is set but the procedure does not have a return value.
    - Batch Termination – The entire client call is terminated.
    - XACT_ABORT – (ON = the entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.)

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013

    Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause.

    Cautionary note: because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used only as a last resort, and only by experienced developers and database administrators, to achieve the desired results.

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Hierarchical Query – SQL in Sixty Seconds #012

    A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a temp table, while providing the same benefits to the user. However, a CTE is more powerful than a derived table, as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly:

    - An anchor query (runs once and its results 'seed' the recursive query)
    - A recursive query (runs multiple times and is the criteria for the remaining results)
    - A UNION ALL statement to bind the anchor and recursive queries together
    - An INNER JOIN statement to bind the recursive query to the results of the CTE

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Server Security – SQL in Sixty Seconds #011

    Let's get some basic definitions down first. Take the workplace example where "Tom" needs "Read" access to the "Financial Folder". What are the securable, principal, and permission in that last sentence? A securable is a resource that someone might want to access (like the Financial Folder). A principal is anything that might want to gain access to the securable (like Tom). A permission is the level of access a principal has to a securable (like Read).

    [Detailed Blog Post] | [Quiz with Answer]

    Please leave a comment explaining which video was your favorite, as that will help me understand what works and what needs improvement.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • A control that contains multiple duplicate properties causing deadlock issues on IIS

    - by heads5150
    I am trying to work out if the above case is true for our site. I've been told by my hosting provider that this fix (http://support.microsoft.com/kb/974165) has to be applied to our server due to performance issues. It basically describes an issue where UI markup like:

        <asp:gridview id="GridView1" runat="server" ... PageSize="100"
            PagerSettings-Mode="Numeric" PagerStyle-BorderStyle="None"
            PagerStyle-BorderColor="Navy" PagerStyle-HorizontalAlign="Right"
            PagerSettings-PageButtonCount="2" PagerSettings-Position="Bottom">
          <PagerStyle HorizontalAlign="Left" BorderColor="Navy" BorderStyle="None"></PagerStyle>
          ...
          <PagerSettings PageButtonCount="2"></PagerSettings>
          ...
        </asp:gridview>

    causes the following warning on the server:

    "ISAPI 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll' reported itself as unhealthy for the following reason: 'Deadlock detected'."

    Does anybody know of a way that I can detect this issue in the build process or the debugger? Any help would be much appreciated.

    Read the article

  • Enable Session state in sharePoint 2010

    - by Albert D. Kallal
    I set up a test box with Server 2008 (Standard edition, not R2 and not the Hyper-V edition). I then installed SharePoint 2010. I was amazed how easily the whole setup went (the prerequisites installer on the SharePoint disc made this process oh so easy; a great install system). Really, this was just so easy. This test box is being used for testing Access web services. I am able to publish Access applications to this test server, and the published applications run just fine on the SharePoint site through a web browser. However, the only thing that does not work is launching an Access report. The error message I get back is:

    This report failed to load because session state is not turned on.

    I can't seem to find the setting anywhere to turn session state on. Any hints or links on how to enable session state in SharePoint 2010 would be most appreciated.

    Read the article

  • What is your contribution to open source projects?

    - by Yuval A
    I have always wondered about this seemingly utopian world of open source. Assuming the vast majority of users here are professional software engineers who need some source of income, I assume most of us hold stable, money-making jobs. So who are the key players in the open source community? Who are the people who devote their precious time to these projects? What is their benefit? Are the majority just people who see a bug, fix it, submit it, and forget about the project? Or are they people constantly involved in the process of building the product? How do you find yourself contributing to open source projects?

    Read the article

  • How to make it easy for users to install my software? Does the programming language matter?

    - by lala
    I'm a beginner-to-intermediate programmer and I've learned some Java and C#. I want to start thinking about making some simple programs that I can release to the world; just basic stuff like calendar software that will probably be free. Users want the install process to be quick and easy. To install a Java program, I have to tell them to have Java installed. To install a C# program, I have to tell them to have .NET installed. I'm worried this might put off potential users who just want to double-click an exe file, choose a directory and be pretty much done. So, I guess this is an either/or two-part question: 1) Is there a programming language that makes it easier to build an installer without requiring users to have other stuff installed? Or: 2) Is there some way to set up an installer that checks the system to see if it has Java/.NET/whatever, and then includes Java/.NET/whatever in the installation if it's not already there?

    Read the article

  • How do I reduce number of redundant requests with mod_perl properly?

    - by rassie
    In a fairly big legacy project, I've refactored several hairy modules into Moose classes. Each of these modules requires database access to (lazily) fetch its attributes. Since those objects are used pretty heavily, I want to reduce the number of redundant requests, for example for unchanged data. Now, how do I do that properly? I've got several alternatives:

    - Implement caching in my Moose classes via a role, storing them in memcached with expiration of 5-10 minutes (probably not too difficult, but tricky with lazy attributes). Update: KiokuDB could probably help here; I have to read up about attributes.
    - Migrate to DBIx::Class (needs to be done anyway) and implement caching at that level (DBIC will probably take most of the pain away just by itself).
    - Somehow make my objects persist inside the mod_perl process (no clue how to do this :().

    How would you do this, and what do you consider a sane way? Is caching data preferred at the object or the ORM level?

    Read the article

  • Enumerating all open file handles and/or registry handles in Windows Mobile / Windows CE 5.x

    - by jdstroy
    Hi all, Is there a way to enumerate all open file handles and/or registry handles in Windows Mobile 5 / Windows CE 5.x? In particular, I'd like to get the handles for all processes in the system, and not just the ones for my application. This would be similar to the list of handles in Sysinternals's Process Explorer for Win32 or Sysinternals's handle.exe I anticipate that someone will ask "Is this absolutely necessary for your application?" My answer to that would be "I think so, unless there's a better way to get a list of all open file names and registry key names." The goal is to provide diagnostic information about an application that crashes and fails to uninstall properly, but that worked properly at one time on the same device. (I do not have debugging information for the buggy application.)

    Read the article

  • How to bootstrap NAnt environment from an existing solution (.sln)

    - by Ron Harlev
    I have a Visual Studio 2005 solution (.sln) with a mix of .NET and C++ projects. What is the best way to generate the .build file I will need to run my build process with NAnt? I'm new to using NAnt and I'm not sure how to set it up. Will I have to update the .build file manually every time there is a new source file in any of the projects? Is there a tool that will generate the files for NAnt from the .sln and Studio project files?

    Read the article

  • Finding specific pixel colors of a BitmapImage

    - by Andrew Shepherd
    I have a WPF BitmapImage which I loaded from a .JPG file, as follows:

        this.m_image1.Source = new BitmapImage(new Uri(path));

    I want to query what the colour is at specific points. For example, what is the RGB value at pixel (65, 32)? How do I go about this? I was taking this approach:

        ImageSource ims = m_image1.Source;
        BitmapImage bitmapImage = (BitmapImage)ims;
        int height = bitmapImage.PixelHeight;
        int width = bitmapImage.PixelWidth;
        int nStride = (bitmapImage.PixelWidth * bitmapImage.Format.BitsPerPixel + 7) / 8;
        byte[] pixelByteArray = new byte[bitmapImage.PixelHeight * nStride];
        bitmapImage.CopyPixels(pixelByteArray, nStride, 0);

    though I will confess there's a bit of monkey-see, monkey-do going on with this code. Anyway, is there a straightforward way to process this array of bytes to convert it to RGB values?
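
    Continuing that code, the bytes for pixel (x, y) start at y * stride + x * bytesPerPixel. For the Bgr32/Bgra32 formats that a JPG-loaded BitmapImage typically uses, the channel order is B, G, R(, A); a minimal sketch, assuming that format (check bitmapImage.Format first, since other PixelFormats order the bytes differently):

        int x = 65, y = 32;
        int bytesPerPixel = (bitmapImage.Format.BitsPerPixel + 7) / 8;
        int offset = y * nStride + x * bytesPerPixel;

        byte b = pixelByteArray[offset];     // Bgr32/Bgra32 store blue first
        byte g = pixelByteArray[offset + 1];
        byte r = pixelByteArray[offset + 2];

        System.Windows.Media.Color color = System.Windows.Media.Color.FromRgb(r, g, b);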

    Read the article

  • Build Procedure

    - by sarah xia
    Hi all, my company is putting an automated build and deploy procedure in place. What we are doing now is checking out the source code from SVN and specifying the source folder in the Ant script. Is that the right way? Can we omit the export step and build directly from SVN? Another question has to do with versioning. At the moment, we create a tag whenever there is a release and then use the tag number to name the build product, which is shipped to a client's site. I've searched on the Internet and here, and it seems the correct way to name a product is like this: x.y.z.revision. However, our company is quite small and clients always want quick changes and releases. I would like to know: what are the drawbacks of only using the tag number to name the product? And what would be the best approach for a small company like us? Thank you, Sarah

    Read the article
