Search Results

Search found 7708 results on 309 pages for 'excel 2007'.

  • KISS: Simple C# application which communicates with a RESTful web service.

    - by Workshop Alex
    Following the KISS principle, I suddenly realised the following: in .NET, you can use the Entity Framework to wrap around a database. This model can be exposed as a web service through WCF. That web service would have a very standardized definition, so a client application could be created which could consume any such RESTful web service. I don't want to re-invent the wheel, and it wouldn't surprise me if someone has already done this, so my question is simple: has anyone already created a simple (desktop, not web) client application that can consume a RESTful service that's based on the Entity Framework and which will allow the user to read and write data directly to this service? Otherwise, I'll just have to "invent" this myself. :-)
    Problem is, the database layer and RESTful service are already finished. The RESTful service will only stay in the project during its development phase, since we can use the database-layer assembly directly from the web applications that are built around it. When the web application is deployed, the RESTful services are just kept out of the deployment. But the database has a lot of data to manage, over nearly 50 tables.
    When developing against a local database, we have direct access to the database, so I wouldn't need this tool there. Once it's deployed, the web application would be the only way to access the data, so I could not use this tool either. But we also have a test phase where the database is stored on another system outside the local domain, and this database is not available to developers. Only administrators have direct access to it, which makes testing a bit more complex. Through the RESTful service, however, I can still access the data directly. Thus, when some test goes wrong, I can repair the data through this connection or just create a copy of the data for tests on my local system.
    There's plenty of other functionality, and it's even possible to just open the URL of a table service straight in Excel or XMLSpy to see the contents. But when I want to write something back, I have to write special code to do just that. A generic tool that would allow me to access the data and modify it would be easier. Since it's a generic setup around ADO.NET Data Services, this should be reasonably easy too. So I can do it, but I hoped someone else had already done something similar. It appears that no such tool has been made yet...
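    For what it's worth, a bare-bones version of such a client can be put together with plain HTTP calls, since an ADO.NET Data Services endpoint speaks Atom/XML over standard verbs. The C# sketch below is only an illustration of that idea; the service URL, entity set name and entry payload are placeholders, not anything taken from the question.
      using System;
      using System.Net;

      class DataServiceClientSketch
      {
          static void Main()
          {
              // Hypothetical service root and entity set -- not from the question.
              string serviceUrl = "http://localhost/MyData.svc/";
              string entitySet  = "Customers";

              using (var client = new WebClient())
              {
                  // Read: a plain GET on the entity set returns an Atom feed, the same
                  // thing you see when opening the table URL in Excel or XMLSpy.
                  string feedXml = client.DownloadString(serviceUrl + entitySet);
                  Console.WriteLine(feedXml);

                  // Write: PUT (or MERGE) an Atom entry back to a single entity URI.
                  string entryXml = "<entry xmlns=\"http://www.w3.org/2005/Atom\">...</entry>";
                  client.Headers[HttpRequestHeader.ContentType] = "application/atom+xml";
                  client.UploadString(serviceUrl + entitySet + "(1)", "PUT", entryXml);
              }
          }
      }
    A generic editing tool would still need to discover the entity sets and build the entry XML dynamically, which is the part that does not appear to exist off the shelf.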

    Read the article

  • Exporting a report from Crystal 8.5 causes the report to first refresh, then export, with unexpected results

    - by LittleBobbyTables
    We have a VB6 application that can generate reports using the Crystal Reports 8.5 runtime. To generate one of the more complicated reports we have, the VB app does the following: Deletes records from a SQL table (we'll call it Foo) based on the session ID of the user Performs a select statement, and populates the Foo table with the contents of the select statement. Massages the data in Foo. Executes the report (we'll call it Bar). The Bar report uses the Foo table as part of some outer joins to get some descriptions. After the report is opened and populated, the code then deletes the records in Foo. If you ever look in Foo there will be no data since it is always purged at the end, but the Crystal Report will still have the data, since Foo wasn't cleared out until after the report ran. Most sites can export this report afterwards, to either PDF or Excel, with no issue. One site, however, has two servers in production where if you attempt to export the Bar report (doesn't matter what format it is exported to), the report will visibly refresh and then export the report in the requested format. This refresh, however, causes the exported data to be invalid because the report is still doing the outer joins to the Foo table, which is now empty. I'm at a total loss why the report refreshes before printing on these two servers. One server has Crystal Reports 8.5 installed on it as well as the Crystal Reports 8.5 runtime (so they can modify reports). The other server only has the Crystal Reports 8.5 runtime (so you can generate reports from the VB application, but can't modify them on that server). Both of the servers belong to a French site. Another support staff here said the issue sounded vaguely familiar to an issue a few years ago, and suggested re-registering DLLs. I have tried unregistering and re-registering the following DLLs out of frustration: Crystl32.ocx crxlat32.dll cpeau32.dll exportmodeller.dll crtslv.dll atl.dll Unregistering and re-registering the above DLLs does not fix the issue. If we take the problem report, and run it on any of our development or QA servers, we have no issues; the report does NOT refresh before exporting, and the data looks consistent. It seems like a server or regional setting may be causing this, but what could possibly cause the report to refresh before exporting on only two of our servers? The most obvious solution is to simply alter the code so the Foo table isn't purged after the report is run, only when the report is run, but this is a production issue, the customer wants a fix now, and there's quite a few hoops to jump through to make the change.

    Read the article

  • Using perl to parse a file and insert specific values into a database

    - by Sean
    Disclaimer: I'm a newbie at scripting in Perl; this is partially a learning exercise (but still a project for work). Also, I have a much stronger grasp on shell scripting, so my examples will likely be formatted in that mindset (but I would like to create them in Perl). Sorry in advance for my verbosity; I want to make sure I am at least marginally clear in getting my point across.
    I have a text file (a reference guide) that is a Word document converted to text and then swapped from Windows to UNIX format in Notepad++. The file is uniform in that each section of the file has the same fields/formatting/tables. What I plan to do, in a basic way, is grab each section, keyed by unique batch job names, and place all of the values into a database (or maybe just an Excel file) so all the fields can be searched/edited for each job much more easily than in the Word file, and possibly create a web interface later on.
    So what I want to do is grab each section by doing something like: sed -n '/job_name_1_regex/,/job_name_2_regex/' file.txt -- how would this be formatted within a Perl script? (Grab the section in total, then break it down further from there.) To read the file in the script I have open FORMAT_FILE, 'test_format.txt'; and then use foreach $line (<FORMAT_FILE>) to parse the file line by line. -- Is there a better way?
    My next problem is that I converted from a Word doc with tables, which look like:
    Table Heading 1      Table Heading 2
    Heading 1/Value 1    Heading 2/Value 1
    Heading 1/Value 2    Heading 2/Value 2
    but in the text file it looks like: Table Heading 1 Table Heading 2Heading 1/Value 1Heading 1/Value 2Heading 2/Value 1Heading 2/Value 2
    So I want to have "Heading 1" and "Heading 2" as column names and then put the respective values there. I just am not sure how to get the values in relation to the heading from the text file. The values of Heading 1 will always be on the line number of Heading 1 plus 2 (Heading 1, Heading 2, values for Heading 1). I know this can be done in awk/sed pretty easily; I'm just not sure how to address it inside a Perl script. After I have all the right values and such, linking it up to a database may be an issue as well; I haven't started looking at the way Perl interacts with DBs yet.
    Sorry if this is a bit scatterbrained... it's still not fully formed in my head.
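    As a rough sketch of the sed-style section grab in Perl: the flip-flop (range) operator does roughly what sed -n '/a/,/b/p' does, and a lexical filehandle in a while loop is the usual way to read line by line instead of slurping with foreach. The job-name patterns below are made up; the real regexes would go in their place.
      #!/usr/bin/perl
      use strict;
      use warnings;

      # Hypothetical patterns -- substitute the real batch job names.
      my $start = qr/^JOB_NAME_ONE/;
      my $end   = qr/^JOB_NAME_TWO/;

      open my $fh, '<', 'test_format.txt' or die "Cannot open file: $!";

      my @section;
      while (my $line = <$fh>) {
          # The range (flip-flop) operator is Perl's answer to sed's /a/,/b/.
          if ($line =~ $start .. $line =~ $end) {
              push @section, $line;
          }
      }
      close $fh;

      # @section holds the block between the two markers, one line per element,
      # so "the line of Heading 1 plus 2" is simply $section[$i + 2].
      print @section;
    For the database step later on, the DBI module (with DBD::mysql or similar) is the usual route, but that is a separate topic.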

    Read the article

  • Storing simulation results in a persistent manner for Python?

    - by Az
    Background: I'm running multiple simulations on a set of data. For each session, I'm allocating projects to students. The difference between each session is that I'm randomising the order of the students, such that all the students get a shot at being assigned a project they want. I was writing out some of the allocations in a spreadsheet (i.e. Excel) and it basically looked like this (tiny snapshot; the actual table extends to a few thousand sessions and roughly 100 students):
    |          | Session 1 | Session 2 | Session 3 |
    |----------|-----------|-----------|-----------|
    |Stu1      |Proj_AA    |Proj_AB    |Proj_AB    |
    |Stu2      |Proj_AB    |Proj_AA    |Proj_AC    |
    |Stu3      |Proj_AC    |Proj_AC    |Proj_AA    |
    Now, the code that deals with the allocation currently stores a session in an object. The next time the allocation is run, the object is overwritten. What I'd really like to do is store all the allocation results. This is important since I later need to derive information from the data, such as which project Stu1 got assigned to the most, or how popular Proj_AC was (how many times it was assigned / number of sessions).
    Question(s): What methods can I possibly use to store such session information persistently? Basically, each session output needs to add itself to the repository after ending and before the next allocation cycle begins. One solution that was suggested by a friend was mapping these results to a relational database using SQLAlchemy. I kind of like the idea, since this gives me an opportunity to delve into databases. The database structure I was recommended was:
    |Session   |Student    |Project    |
    |----------|-----------|-----------|
    |1         |Stu1       |Proj_AA    |
    |1         |Stu2       |Proj_AB    |
    |1         |Stu3       |Proj_AC    |
    |2         |Stu1       |Proj_AB    |
    |2         |Stu2       |Proj_AA    |
    |2         |Stu3       |Proj_AC    |
    |3         |Stu1       |Proj_AB    |
    |3         |Stu2       |Proj_AC    |
    |3         |Stu3       |Proj_AA    |
    Here, it was suggested that I make the Session and Student columns a composite key. That way I can access a specific record for a particular student in a particular session, or merely get the entire allocation run for a particular session.
    Questions: Is the idea a good one? How does one implement and query a composite key using SQLAlchemy? What happens to the database if a particular student is not assigned a project (which happens if all projects that he wants are taken)? In the code, if a student is not assigned a project, instead of a proj_id he simply gets None for that field/object.
    I apologise for asking multiple questions, but since these are closely related, I thought I'd ask them in the same space.
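    To illustrate the composite-key idea, here is a minimal declarative SQLAlchemy sketch. The table, class and column names are made up to match the example, not prescribed; marking both session_id and student as primary-key columns gives the composite key, and leaving project nullable covers the unassigned case (storing None works fine).
      from sqlalchemy import Column, Integer, String, create_engine
      from sqlalchemy.ext.declarative import declarative_base
      from sqlalchemy.orm import sessionmaker

      Base = declarative_base()

      class Allocation(Base):
          __tablename__ = 'allocations'
          # Composite primary key: one row per (session, student) pair.
          session_id = Column(Integer, primary_key=True)
          student    = Column(String(16), primary_key=True)
          project    = Column(String(16), nullable=True)  # None when nothing was assigned

      engine = create_engine('sqlite:///allocations.db')
      Base.metadata.create_all(engine)
      Session = sessionmaker(bind=engine)
      db = Session()

      # After each simulation run, append that session's results.
      db.add_all([Allocation(session_id=1, student='Stu1', project='Proj_AA'),
                  Allocation(session_id=1, student='Stu2', project='Proj_AB'),
                  Allocation(session_id=2, student='Stu3', project=None)])
      db.commit()

      # One student in one session (composite key lookup)...
      print(db.query(Allocation).get((1, 'Stu1')).project)
      # ...or a whole session at once.
      print(db.query(Allocation).filter_by(session_id=2).all())
    The popularity statistics then become simple aggregate queries (group by project, count rows) rather than spreadsheet work.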

    Read the article

  • SimpleXML SOAP response Namespace issues

    - by Stu
    Hi. After spending SEVERAL frustrated hours on this I am asking for your help. I am trying to get the content of particular nodes from a SOAP response. The response is $XmlStr = <?xml version="1.0" encoding="UTF-8"?><env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope"<xmlns:ns1="http://soap.xxxxxx.co.uk/"><env:Body><ns1:PlaceOrderResponse><xxxxxOrderNumber></xxxxxOrderNumber><ErrorArray><Error><ErrorCode>24</ErrorCode><ErrorText>The+client+order+number+3002254+is+already+in+use</ErrorText></Error><Error><ErrorCode>1</ErrorCode><ErrorText>Aborting</ErrorText></Error></ErrorArray></ns1:PlaceOrderResponse></env:Body></env:Envelope> I am trying to get at the nodes and children of <ErrorArray. Because of the XML containing namespaces $XmlArray = new SimpleXMLElement($XmlStr); foreach ($XmlArray->env:Envelope->env:Body->ns1:PlaceOrderResponse->ErrorArray->Error as $Error) { echo $Error->ErrorCode."<br />";<br /> } doesn't work. I have read a number of articles such as http://www.sitepoint.com/blogs/2005/10/20/simplexml-and-namespaces/ htp://blog.stuartherbert.com/php/2007/01/07/using-simplexml-to-parse-rss-feeds/ and about 20 questions on this site which unfortunately are not helping. (the second link has htp:// as a newbie here I cannot post more than one link) Even writing, $XmlArray = new SimpleXMLElement($XmlStr); echo "<br /><br /><pre>\n"; print_r($XmlArray); echo "<pre><br /><br />\n"; gives SimpleXMLElement Object ( ) which makes me wonder if the soap response ($XmlStr) is actually a valid input for SimpleXMLElement. It seems that the line $XmlArray = new SimpleXMLElement($XmlStr); is not doing what I expect it to. Any help on how to get the nodes from the XML above would be very welcome. Obviously getting it to work (having a working example) is what I need in the short term, but if someone could help me understand what I am doing wrong would be better in the long term. Cheers. Stu

    Read the article

  • Function Point Analysis -- a seriously over-estimating technique?

    - by kizzx2
    I know questions about FPA have been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data.
    1. First, some data. This question is based on a tutorial. The author had a "Sample Count" section where he demonstrated it step by step. You can see some screenshots of his sample application here. In the end, he calculated the unadjusted FP to be 99. There is another article on InformIT with industry data on typical hours/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p).
    2. Reality check!? Now just check out the screenshots again and do a little math:
    99 * 2 = 198 hours
    198 hours / 40 hours per week = 5 weeks
    Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week to have it completed? Now let's try estimating the cost of the project. We'll use New York's minimum wage at the moment (Wikipedia), which is $7.25:
    198 * 7.25 = $1435.5
    From what I could see from the screenshots, this application is a small Excel-improvement app. I could have bought MS Office Pro for 200 bucks, which gives me greater interoperability (.xls files) and flexibility (spreadsheets). (For the record, that same Web site has another article discussing productivity. It seems like they typically use 4.2 hours/FP, which gives us even more shocking stats:
    99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months!
    415 hours * $7.25 = $3000 zomg
    That's even assuming that all our poor coders get the minimum wage!)
    3. Am I missing something here? Right now, I can come up with several possible explanations: FPA is really only suited for bigger projects (1000+ FPs), so it becomes extremely inaccurate at smaller scale. The hours/FP metric fluctuates abruptly from team to team and project to project. For a small project like this, we could have used something like 0.5 hours/FP. (Now this kind of makes the whole estimation thing pointless, unless my firm does the same type of project for several years with the same team, which is not really common.)
    From my experience with several software metrics, Function Point is really not a lightweight metric. If the hours/FP figure fluctuates so much, then what's the point? Maybe I could have gone with User Story Points, which is a lot faster to get and arguably almost as uncertain. What would be the FP experts' answers to this?

    Read the article

  • Is there a way to split a widescreen monitor in to two or more virtual monitors?

    - by Mike Thompson
    Like most developers I have grown to love dual monitors. I won't go into all the reasons for their goodness; just take it as a given. However, they are not perfect. You can never seem to line them up "just right". You always end up with the monitors at slightly funny angles. And of course the bezel always gets in the way. And this is with identical monitors. The problem is much worse with different monitors -- VMware's multi-monitor feature won't even work with monitors of different resolutions.
    When you use multiple monitors, one of them becomes your primary monitor of focus. Your focus may flip from one monitor to the other, but at any point in time you are usually focusing on only one monitor. There are exceptions to this (WinDiff, Excel), but this is generally the case. I suggest that having a single large monitor with all the benefits of multiple smaller monitors would be a better solution. Widescreen monitors are fantastic, but it is hard to use all the space efficiently. If you are writing code you are generally working on the left-hand side of the window. If you maximize an editor on a widescreen monitor, the right-hand side of the window will be a sea of white. Programs like WinSplit Revolution will help to organise your windows, but this is really just addressing the symptom, not the problem. Even with WinSplit Revolution, when you maximise a window it will take up the whole screen. You can't lock a window into a specific section of the screen.
    This is where virtual monitors come in. What would be really nice is a video driver that sits on top of the existing driver but allows a single monitor to be virtualised into multiple monitors. Control Panel would see your single physical monitor as two or more virtual monitors. The software could even support a virtual bezel to emphasise what is happening, or you could opt for a seamless mode. Programs like WinSplit Revolution and UltraMon would still work. This virtual video driver would allow you to slice and dice your physical monitor into as many virtual monitors as you want.
    Does anybody know if such software exists? If not, are there any budding Windows display driver gurus out there willing to take up the challenge?
    I am not after the myriad of virtual desktop/window manager programs that are available. I get frustrated with these programs. They seem good at first, but they usually have some strange behaviour and don't work well with other programs (such as WinSplit Revolution). I want the real thing!

    Read the article

  • jQuery fadeIn fadeOut pause on hover

    - by theDawckta
    I have a little jQuery snippet that will fadeIn and fadeOut a group of divs over a select interval. I now need to pause this fadeIn fadeOut on hover, then resume on mouse out. Any help is appreciated. Here is the relevant code. The following is what is located in my body <div class="gcRotate"> <div class="gcRotateContent"> <div style="border: solid 2px black; text-align: center; width: 150px;"> This is first content <img src="http://pix.motivatedphotos.com/2008/6/16/633492359109161542-Skills.jpg" alt="Dude" /> </div> </div> <div class="gcRotateContent"> <div style="border: solid 2px black; text-align: center; width: 150px"> This is second content <img src="http://www.funnycorner.net/funny-pictures/5010/cheezburger-locats.jpg" alt="Dude" /> </div> </div> <div class="gcRotateContent"> <div style="border: solid 2px black; text-align: center; width: 150px"> This is third content <img src="http://icanhascheezburger.files.wordpress.com/2007/06/business.jpg" alt="Dude" /> </div> </div> </div> <div> This shouldn't move. </div> <script type="text/javascript"> function fadeContent() { $(".gcRotate .gcRotateContent:hidden:first").fadeIn(500).delay(2000).fadeOut(500, function() { $(this).appendTo($(this).parent()); fadeContent(); }); } $(".gcRotate").height(0); $(".gcRotateContent").each( function() { if ($(".gcRotate").height() < $(this).height()) { $(".gcRotate").height($(this).height()); } } ); $(".gcRotateContent").each(function() { $(this).css("display", "none") }); fadeContent(); </script>
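    One way to get the pause-on-hover behaviour is to replace the .delay() call with a setTimeout that can be cancelled, since a .delay() in the effects queue can't be interrupted cleanly. The following is a rough, untested sketch of that approach (same class names as above, rotation logic only; the height-matching code would stay as it is):
      var rotateTimer;

      function fadeContent() {
          $(".gcRotate .gcRotateContent:hidden:first").fadeIn(500, function () {
              var current = $(this);
              // Schedule the fade-out instead of using .delay(), so it can be cancelled.
              rotateTimer = setTimeout(function () {
                  current.fadeOut(500, function () {
                      $(this).appendTo($(this).parent());
                      fadeContent();
                  });
              }, 2000);
          });
      }

      $(".gcRotate").hover(
          function () { clearTimeout(rotateTimer); },   // pause while the mouse is over the rotator
          function () {                                 // resume on mouse out
              rotateTimer = setTimeout(function () {
                  $(".gcRotate .gcRotateContent:visible").fadeOut(500, function () {
                      $(this).appendTo($(this).parent());
                      fadeContent();
                  });
              }, 2000);
          }
      );

      fadeContent();
    Edge cases (for example the mouse leaving mid-fade) would need a little extra guarding, but the cancel-and-reschedule idea is the core of it.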

    Read the article

  • Zend Table Relationship Modeling with Composite Key

    - by emeraldjava
    I have a table with a composite primary key using four columns. mysql> describe leaguesummary; +------------------------+------------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +------------------------+------------------+------+-----+---------+----------------+ | leagueid | int(10) unsigned | NO | PRI | NULL | auto_increment | | leaguetype | enum('I','T') | NO | PRI | NULL | | | leagueparticipantid | int(10) unsigned | NO | PRI | NULL | | | leaguestandard | int(10) unsigned | NO | | NULL | | | leaguedivision | varchar(5) | NO | PRI | NULL | | | leagueposition | int(10) unsigned | NO | | NULL | | I have the league object modelled as so (all plain enough mappings) <?php class Model_DbTable_League extends Zend_Db_Table_Abstract { protected $_name = 'league'; protected $_primary = 'id'; protected $_dependentTables = array('Model_DbTable_LeagueSummary'); And I've started like this on the new model class. I've mapped a simple reference map which returns all rows linked to the league id. // http://files.zend.com/help/Zend-Framework/zend.db.table.relationships.html // http://naneau.nl/2007/04/21/a-zend-framework-tutorial-part-one/ class Model_DbTable_LeagueSummary extends Zend_Db_Table_Abstract { protected $_name = "leaguesummary"; protected $_primary = array('leagueid', 'leaguetype','leagueparticipantid','leaguedivision'); protected $_referenceMap = array( 'Summary' => array( 'columns' => array('leagueid'), 'refTableClass' => 'Model_DbTable_League', 'refColumns' => array('id') ), ..... ); } ?> The simple case works when called from my controller public function listAction() { // action body $leagueTable = new Model_DbTable_League(); $this->view->leagues = $leagueTable->getLeagues(); $league = $leagueTable->getLeague(6); // work $summary = $league->findDependentRowset('Model_DbTable_LeagueSummary','Summary'); Zend_Debug::dump($summary,"",true); I'm not sure how i can define extra _referenceMap keys which will take extra contraint ket values. I would like to be able to define a set called 'MenA' in which the type and division values are hardcoded, and the league id is taken from the initial rowset. 'MenA' =>array( 'columns' => array('leagueid','leaguetype','leaguedivision'), 'refTableClass' => 'Model_DbTable_League', 'refColumns' => array("id","I","A") ) Is this style of mapping possible ie hardcoding the values into the 'refColumns'. The second crazy idea i had was to pass the variable values in as part of the third param of the findDependentRowset() method. $menA = $league->findDependentRowset('Model_DbTable_LeagueSummary','MenA',array("I","A")); Any suggestions on how I might use the Zend DB Table Relationship mapping correctly to do this would be appreciated. I'm not interested in the plain, old and ugly $db-select(a,b,c)-where(..) style solution.
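    If the hard-coded refColumns idea turns out not to be supported, one compromise that at least stays inside the table class (rather than raw $db->select() calls in the controller) is a small finder method on Model_DbTable_LeagueSummary. The sketch below is only illustrative; getDivision() and its defaults are made-up names, not part of Zend's relationship API:
      <?php
      class Model_DbTable_LeagueSummary extends Zend_Db_Table_Abstract
      {
          // ... existing $_name, $_primary and $_referenceMap stay as shown above ...

          // Hypothetical helper: all summary rows for one league, one type, one division.
          public function getDivision($leagueId, $type = 'I', $division = 'A')
          {
              $select = $this->select()
                             ->where('leagueid = ?', $leagueId)
                             ->where('leaguetype = ?', $type)
                             ->where('leaguedivision = ?', $division);
              return $this->fetchAll($select);
          }
      }
      // Usage from a controller: $summaryTable->getDivision(6, 'I', 'A');
    It is not the declarative reference-map mapping asked about, but it keeps the constraint logic in the model rather than scattered through controllers.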

    Read the article

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS DOS with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far. One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable length record format much like CSV that I've successfully implemented using the C standard library file I/O functions. When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record. EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset. This is working well, however I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size because the data collection program will be filling some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made on an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory. In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records are and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area. The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file and writing all the data you wish to save into that file, and then deleting the original. Since I only have 4MB worth of disk space to use, this works if the file size is less than 2MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users would end up with a file exceeding 2MB in size. I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
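    For what it's worth, plain DOS does have an in-place truncation mechanism: writing zero bytes with INT 21h function 40h truncates (or extends) an open file at the current file position, and most real-mode C runtimes expose that as chsize(). The fragment below is only a sketch; whether the Digital Mars runtime ships chsize() in io.h, and whether fileno() hands back the raw DOS handle there, is an assumption worth checking against its documentation before relying on it.
      #include <stdio.h>
      #include <io.h>        /* chsize() in most DOS C runtimes */

      /* Shrink an already-open stdio file down to new_size bytes. */
      int shrink_file(FILE *fp, long new_size)
      {
          fflush(fp);                          /* push buffered writes out first      */
          return chsize(fileno(fp), new_size); /* 0 on success, -1 on failure         */
      }
    If chsize() is missing, the same effect can be had by seeking to the desired length and issuing the INT 21h AH=40h write with CX=0 directly, which avoids the copy-to-a-second-file approach entirely.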

    Read the article

  • usercontrol hosted in IE renders as a textbox

    - by coxymla
    On my ongoing saga to mirror the hosting of a legacy app on a clean box, I've hit my next snag. One page relies on a big .NET UserControl that on the new machine renders only as a big, greyed out textarea (greyed out vertical scrollbar on the right hand edge. Inspecting the source shows the expected object tag.) This is particularly tricky because nobody seems to know much about hosted UserControls and all the discussions data back to 2002-2004. The page is quite simple: <%@ Page language="c#" Codebehind="DataExport.aspx.cs" AutoEventWireup="false" Inherits="yyyyy.Web.DataExport" %> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" > <html> <head> <title>DataExport</title> <link rel="Configuration" href="/xxxxx/yyyyy/DataExport.config"> </head> <body style="margin:0px;padding:0px;overflow:hidden"> <OBJECT id="DataExport" style="WIDTH: 100%; HEIGHT: 100%; position:absolute; left: 0px; top:0px" classid="yyyyy.Common.dll#yyyyy.Controls.DataExport" VIEWASTEXT> </OBJECT> </body> </html> The config file referenced: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <sectionGroup name="yyyyy"> <section name="dataExport" type="yyyyy.Controls.DataExportSectionHandler,yyyyy.Common" /> </sectionGroup> </configSections> <yyyyy> <dataExport> <layoutFile>http://vm2/xxxxx/yyyyy/layout.xml</layoutFile> <webServiceUrl>http://vm2/xxxxx/yyyyy/services/yyyyy.asmx</webServiceUrl> </dataExport> </yyyyy> </configuration> What I've checked: Security permissions should be OK, the site is trusted and adding a URL exception to grant FullTrust doesn't change anything. Config file is acessible over the web, layout.xml is accessible, ASMX shows the expected command list Machine.config grants GET permission for the usercontrol.config file. What perhaps looks fishy to me: The DataExport UserControl references Aspose.Excel to generate the spreadsheets it exports. When I navigate to the page and get a blank textbox, then run gacutil /ldl, nothing is in the local download cache. On the working machine, running the same command after viewing the page shows a laundry list of DLLs including the control DLL and the Aspose DLL.

    Read the article

  • Django: Grouping by Dates and Servers

    - by TheLizardKing
    So I am trying to emulate google app's status page: http://www.google.com/appsstatus#hl=en but for backups for our own servers. Instead of service names on the left it'll be server names but the dates and hopefully the pagination will be there too. My models look incredibly similar to this: from django.db import models STATUS_CHOICES = ( ('UN', 'Unknown'), ('NI', 'No Issue'), ('IS', 'Issue'), ('NR', 'Not Running'), ) class Server(models.Model): name = models.CharField(max_length=32) def __unicode__(self): return self.name class Backup(models.Model): server = models.ForeignKey(Server) created = models.DateField(auto_now_add=True) modified = models.DateTimeField(auto_now=True) status = models.CharField(max_length=2, choices=STATUS_CHOICES, default='UN') issue = models.TextField(blank=True) def __unicode__(self): return u'%s: %s' % (self.server, self.get_status_display()) My issue is that I am having a hell of a time displaying the information I need. Everyday a little after midnight a cron job will run and add a row for each server for that day, defaulting on status unknown (UN). My backups.html: {% extends "base.html" %} {% block content %} <table> <tr> <th>Name</th> {% for server in servers %} <th>{{ created }}</th> </tr> <tr> <td>{{ server.name }}</td> {% for backup in server.backup_set.all %} <td>{{ backup.get_status_display }}</td> {% endfor %} </tr> {% endfor %} </table> {% endblock content %} This actually works but I do not know how to get the dates to show. Obviously {{ created }} doesn't do anything but the servers don't have create dates. Backups do and because it's a cron job there should only be X number of rows with any particular date (depending on how many servers we are following for that day). Summary I want to create a table, X being server names, Y being dates starting at today while all the cells being the status of a backup. The above model and template should hopefully give you an idea what my thought process but I am willing to alter anything. Basically I am create a fancy excel spreadsheet.
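    One low-tech way to get the date columns out is to pivot the data in the view rather than in the template, so the template only walks a list of dates for the header and a list of (server, cells) rows for the body. A rough sketch, assuming the models above; the view name, app import path and template name are placeholders:
      from collections import defaultdict
      from django.shortcuts import render_to_response
      from status.models import Backup   # hypothetical app name

      def backup_grid(request):
          backups = Backup.objects.select_related('server').order_by('created')
          dates = sorted(set(b.created for b in backups))

          by_server = defaultdict(dict)
          for b in backups:
              by_server[b.server.name][b.created] = b.get_status_display()

          # One row per server: (name, [status per date, 'Unknown' if the cron job missed it]).
          rows = [(name, [cells.get(d, 'Unknown') for d in dates])
                  for name, cells in sorted(by_server.items())]

          return render_to_response('backups.html', {'dates': dates, 'rows': rows})
    The template then loops over dates for the header cells and over rows for the body, which sidesteps the problem of reaching for a per-server created date that doesn't exist.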

    Read the article

  • How to Load Dependent Files on Demand + Check if They're Loaded or Not?

    - by br4inwash3r
    I'm trying to implement an assets/dependency loader that i've found from an old article at 24Ways.org. most of you might be familiar with it. it's from this article by Christian Heilmann: http://24ways.org/2007/keeping-javascript-dependencies-at-bay i've modified the script to load CSS files as well. and it's now quite close to what i want. but i still need to do some checking to see wether an asset have been completely loaded or not. just wondering if you guys have any ideas :) here's what my script currently looked like: var assetLoader = { assets: { products: { js: 'products.js', css: 'products.css', loaded: false }, articles: { js: 'articles.js', css: 'articles.css', loaded: false }, [...] cycle: { js: 'jquery.cycle.min.js', loaded: false }, swfobject: { js: 'jquery.swfobject.min.js', loaded: false } }, add: function(asset) { var comp = assetLoader.assets[asset]; var path = '/path/to/assets/'; if (comp && comp.loaded == false) { if (comp.js) { // load js var js = document.createElement('script'); js.src = path + 'js/' + comp.js; js.type = 'text/javascript'; js.charset = 'utf-8'; // append to document document.getElementsByTagName('body')[0].appendChild(js); } if (comp.css) { // load css var css = document.createElement('link'); css.rel = 'stylesheet'; css.href = path + 'css/' + comp.css; css.type = 'text/css'; css.media = 'screen, projection'; css.charset = 'utf-8'; // append to document document.getElementsByTagName('head')[0].appendChild(css); } } }, check: function(asset) { assetLoader.assets[asset].loaded = true; } } Christian explains this method in his article in great detail. I don't want to confuse you guys anymore with my bad english :P and here's an example of how i run the script: ... // load jquery cycle plugin if (page=='tvc' || page=='products') { if (!assetLoader.assets.cycle.loaded) { assetLoader.add('cycle'); } } // load products page assets if (!assetLoader.assets.products.loaded) { assetLoader.add('products'); } ... this kind of approach is very problematic though. coz assets loads asynchronously, which means some of the code inside products.js that depends on jquery.cycle.js might continue running before jquery.cycle.js is even loaded resulting in errors. while i'm quite aware that scripts can be attached with an onload event, i'm just not really sure how to implement it to my script. anyone care to help me? please... :P
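    A common way to handle the "is it loaded yet" part is to hook the script element's onload (and, for older IE, onreadystatechange) before appending it, and only flip the loaded flag and fire a callback there. The sketch below shows how add() could be extended along those lines; the callback parameter is an addition for illustration, not part of the original 24ways article:
      add: function (asset, callback) {
          var comp = assetLoader.assets[asset];
          var path = '/path/to/assets/';
          if (comp && comp.loaded == false) {
              if (comp.js) {
                  var js = document.createElement('script');
                  js.type = 'text/javascript';
                  js.charset = 'utf-8';

                  var done = false;
                  js.onload = js.onreadystatechange = function () {
                      if (!done && (!this.readyState ||
                                    this.readyState === 'loaded' ||
                                    this.readyState === 'complete')) {
                          done = true;
                          comp.loaded = true;           // mark it so dependants can check
                          if (callback) { callback(); } // run code that needs this file
                      }
                  };

                  js.src = path + 'js/' + comp.js;      // set src after wiring the handlers
                  document.getElementsByTagName('body')[0].appendChild(js);
              }
              // CSS can be appended exactly as before; stylesheets rarely need a load event here.
          }
      }
    Dependencies can then be chained instead of loaded blindly, e.g. assetLoader.add('cycle', function () { assetLoader.add('products'); });, so products.js never runs before jquery.cycle.min.js is in place.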

    Read the article

  • WPF: Improving Performance for Running on Older PCs

    - by Phil Sandler
    So, I'm building a WPF app and did a test deployment today, and found that it performed pretty poorly. I was surprised, as we are really not doing much in the way of visual effects or animations. I deployed on two machines: the fastest and the slowest that will need to run the application (the slowest PC has an Intel Celeron 1.80GHz with 2GB RAM). The application ran pretty well on the faster machine, but was choppy on the slower machine. And when I say "choppy", I mean the cursor jumped even just passing it over any open window of the app that had focus. I opened the Task Manager Performance window, and could see that the CPU usage jumped whenever the app had focus and the cursor was moving over it. If I gave focus to another (e.g. Excel), the CPU usage went back down after a second. This happened on both machines, but the choppiness was only noticeable on the slower machine. I had very limited time to tinker on the deployment machines, so didn't do a lot of detailed testing. The app runs fine on my development machine, but I also see the CPU spiking up to 10% there, just running the cursor over the window. I downloaded the WPF performance tool from MS and have been tinkering with it (on my dev machine). The docs say this about the "Frame Rate" metric in the Perforator tool: For applications without animation, this value should be near 0. The app is not doing any heavy animation, but the frame rate stays near 50 when the cursor is over any window. The screens I tested on have column headers in a grid that "highlight" and buttons that change color and appearance when scrolled over. Even moving the mouse on blank areas of the windows cause the same Frame rate and CPU usage (doesn't seem to be related to these minor animations). (Also, I am unable to figure out how to get anything but the two default tools--Perforator and Visual Profiler--installed into the WPF performance tool. That is probably a separate question). I also have Redgate's profiling tool, but I'm not sure if that can shed any light on rendering performance. So, I realize this is not an easy thing to troubleshoot without specifics or sample code (which I can't post). My questions are: What are some general things to look for (or avoid) in the code to improve performance? What steps can I take using the WPF performance tool to narrow down the problem? Is the PC spec listed above (Intel Celeron 1.80GHz with 2GB RAM) too slow to be running even vanilla WPF applications?

    Read the article

  • How to query data from a password protected https website

    - by Addie
    I'd like my application to query a csv file from a secure website. I have no experience with web programming so I'd appreciate detailed instructions. Currently I have the user login to the site, manually query the csv, and have my application load the file locally. I'd like to automate this by having the user enter his login information, authenticating him on the website, and querying the data. The application is written in C# .NET. The url of the site is: https://www2.emidas.com/default.asp. I've tested the following code already and am able to access the file once the user has already authenticated himself and created a manual query. System.Net.WebClient Client = new WebClient(); Stream strm = Client.OpenRead("https://www3.emidas.com/users/<username>/file.csv"); Here is the request sent to the site for authentication. I've angle bracketed the real userid and password. POST /pwdVal.asp HTTP/1.1 Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, application/x-shockwave-flash, */* User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; Tablet PC 2.0; OfficeLiveConnector.1.4; OfficeLivePatch.1.3; .NET4.0C; .NET4.0E) Content-Type: application/x-www-form-urlencoded Accept-Encoding: gzip, deflate Cookie: ASPSESSIONID<unsure if this data contained password info so removed>; ClientId=<username> Host: www3.emidas.com Content-Length: 36 Connection: Keep-Alive Cache-Control: no-cache Accept-Language: en-US client_id=<username>&password=<password>
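    In broad strokes, the usual pattern is to POST the login form with HttpWebRequest, keep whatever cookies come back in a CookieContainer, and reuse that container for the CSV request. The sketch below assumes the URLs and field names shown in the capture; the real login page may also require hidden form fields or URL-encoding of the credentials, so treat it as a starting point rather than a finished solution.
      using System.IO;
      using System.Net;
      using System.Text;

      class EmidasDownloadSketch
      {
          static string FetchCsv(string user, string password)
          {
              var cookies = new CookieContainer();

              // 1. Post the credentials and let the server set its session cookies.
              var login = (HttpWebRequest)WebRequest.Create("https://www2.emidas.com/pwdVal.asp");
              login.Method = "POST";
              login.ContentType = "application/x-www-form-urlencoded";
              login.CookieContainer = cookies;
              byte[] body = Encoding.ASCII.GetBytes("client_id=" + user + "&password=" + password);
              using (Stream s = login.GetRequestStream())
                  s.Write(body, 0, body.Length);
              login.GetResponse().Close();

              // 2. Request the CSV with the same cookie container.
              var csv = (HttpWebRequest)WebRequest.Create(
                  "https://www3.emidas.com/users/" + user + "/file.csv");
              csv.CookieContainer = cookies;
              using (var reader = new StreamReader(csv.GetResponse().GetResponseStream()))
                  return reader.ReadToEnd();
          }
      }
    Note the login lives on www2 while the file lives on www3; whether the session cookie is scoped to cover both hosts is something to verify against the actual Set-Cookie headers.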

    Read the article

  • Parsing CSV File to MySQL DB in PHP

    - by Austin
    I have a roughly 350-line CSV file with all sorts of vendors that fall into Clothes, Tools, Entertainment, etc. categories. Using the following code I have been able to print out my CSV file:
    <?php
    $fp = fopen('promo_catalog_expanded.csv', 'r');
    echo '<tr><td>';
    echo implode('</td><td>', fgetcsv($fp, 4096, ','));
    echo '</td></tr>';
    while(!feof($fp)) {
        list($cat, $var, $name, $var2, $web, $var3, $phone,$var4, $kw,$var5, $desc) = fgetcsv($fp, 4096);
        echo '<tr><td>';
        echo $cat. '</td><td>' . $name . '</td><td><a href="http://www.' . $web .'" target="_blank">' .$web.'</a></td><td>'.$phone.'</td><td>'.$kw.'</td><td>'.$desc.'</td>' ;
        echo '</td></tr>';
    }
    fclose($file_handle);
    show_source(__FILE__);
    ?>
    The first thing you will probably notice is the extraneous vars within the list(). This is because of how the Excel spreadsheet/CSV file is laid out:
    Category,,Company Name,,Website,,Phone,,Keywords,,Description
    ,,,,,,,,,,
    Clothes,,4imprint,,4imprint.com,,877-466-7746,,"polos, jackets, coats, workwear, sweatshirts, hoodies, long sleeve, pullovers, t-shirts, tees, tshirts,",,An embroidery and apparel company based in Wisconsin.
    ,,Apollo Embroidery,,apolloemb.com,,1-800-982-2146,,"hats, caps, headwear, bags, totes, backpacks, blankets, embroidery",,An embroidery sales company based in California.
    One thing to note is that the last line starts with two commas, as it is also listed within the "Clothes" category. My concern is that I am going about the CSV output wrong. Should I be using a foreach loop instead of this list() approach? Should I first get rid of any unnecessary blank columns? Please advise on any flaws you may find and improvements I can use, so I can be ready to import this data into a MySQL DB.
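    Dropping the spacer columns up front does simplify things: picking the known indexes (or array_filter) plus carrying the category forward over the blank cells covers the layout above, and a prepared statement handles the MySQL side. A rough sketch follows; the vendors table, column names and connection credentials are hypothetical.
      <?php
      $db = new mysqli('localhost', 'user', 'pass', 'catalog');   // hypothetical credentials
      $stmt = $db->prepare(
          'INSERT INTO vendors (category, name, website, phone, keywords, description)
           VALUES (?, ?, ?, ?, ?, ?)');

      $fp = fopen('promo_catalog_expanded.csv', 'r');
      fgetcsv($fp, 4096, ',');          // skip the header row
      $lastCat = '';

      while (($row = fgetcsv($fp, 4096, ',')) !== false) {
          // Columns 0,2,4,6,8,10 hold data; the odd-numbered columns are the blank spacers.
          list($cat, , $name, , $web, , $phone, , $kw, , $desc) = array_pad($row, 11, '');
          if (trim($name) == '') { continue; }              // the all-comma separator lines
          if (trim($cat) == '')  { $cat = $lastCat; } else { $lastCat = $cat; }

          $stmt->bind_param('ssssss', $cat, $name, $web, $phone, $kw, $desc);
          $stmt->execute();
      }
      fclose($fp);
    Carrying $lastCat forward is what handles rows like the Apollo Embroidery line that start with two commas but still belong to the previous category.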

    Read the article

  • WPF DataGrid, Help with Binding to a List<X> where each X has a Dictionary<Y,object> property.

    - by panamack
    I'm building an application which helps someone manage an event and works with data originating from Excel. I want to use the WPF Toolkit DataGrid to display the incoming data but can't guarantee how many Columns there are going to be or what information they will contain. I'd like to have an Info class that stores column information and have each Visitor at my Event own a Dictionary that uses shared references to Info objects for the keys. Here's the general gist: public class Info{ public string Name{get;set;} public int InfoType{get;set;} } public class Visitor{ public Dictionary<Info,object> VisitorInfo {get;set;} } public class Event{ public List<Visitor> Visitors{get;set;} public Event(){ Info i1 = new Info(){ Name = "ID", InfoType = 0};// type 0 for an id Info i2 = new Info(){ Name = "Name", InfoType = 1};// type 1 for a string Info i3 = new Info(){ Name = "City", InfoType = 1}; Visitor v1 = new Visitor(); v1.VisitorInfo.Add(i1, 0); v1.VisitorInfo.Add(i2, "Foo Harris"); v1.VisitorInfo.Add(i3, "Barsville"); Visitor v2 = new Visitor(); ... this.Visitors.Add(v1); this.Visitors.Add(v2); ... } } XAML: <!-- Window1.xaml --> ... <local:Event x:Key="MyEvent"/> ... <wpftk:DataGrid DataContext="{StaticResource MyEvent}" ItemsSource="{Binding Path=Visitors}" /> Disappointingly, DataGrid just sees a collection of Visitors each having a VisitorInfo property and displays one column called VisitorInfo with the string "(Collection)" once for each Visitor. As a workaround I've created a ListTVisitorToDataTableConverter that maps Infos to DataColumns and Visitors to DataRows and used it like this: <wpftk:DataGrid DataContext="{StaticResource Event}" ItemsSource{Binding Path=Visitors, Converter={StaticResource MySmellyListTVisitorToDataTableConverter}}" /> I don't think this is good though, I haven't started trying to convert back yet which I guess I'll need to do if I want to be able to edit any data! How can I do better? Thanks.

    Read the article

  • Sync services not actually syncing

    - by Paul Mrozowski
    I'm attempting to sync a SQL Server CE 3.5 database with a SQL Server 2008 database using MS Sync Services. I am using VS 2008. I created a Local Database Cache, connected it with SQL Server 2008 and picked the tables I wanted to sync. I selected SQL Server Tracking. It modified the database for change tracking and created a local copy (SDF) of the data. I need two way syncing so I created a partial class for the sync agent and added code into the OnInitialized() to set the SyncDirection for the tables to Bidirectional. I've walked through with the debugger and this code runs. Then I created another partial class for cache server sync provider and added an event handler into the OnInitialized() to hook into the ApplyChangeFailed event. This code also works OK - my code runs when there is a conflict. Finally, I manually made some changes to the server data to test syncing. I use this code to fire off a sync: var agent = new FSEMobileCacheSyncAgent(); var syncStats = agent.Synchronize(); syncStats seems to show the count of the # of changes I made on the server and shows that they were applied. However, when I open the local SDF file none of the changes are there. I basically followed the instructions I found here: http://msdn.microsoft.com/en-us/library/cc761546%28SQL.105%29.aspx and here: http://keithelder.net/blog/archive/2007/09/23/Sync-Services-for-SQL-Server-Compact-Edition-3.5-in-Visual.aspx It seems like this should "just work" at this point, but the changes made on the server aren't in the local SDF file. I guess I'm missing something but I'm just not seeing it right now. I thought this might be because I appeared to be using version 1 of Sync Services so I removed the references to Microsoft.Synchronization.* assemblies, installed the Sync framework 2.0 and added the new version of the assemblies to the project. That hasn't made any difference. Ideas? Edit: I wanted to enable tracing to see if I could track this down but the only way to do that is through a WinForms app since it requires entries in the app.config file (my original project was a class library). I created a WinForms project and recreated everything and suddenly everything is working. So apparently this requires a WinForm project for some reason? This isn't really how I planned on using this - I had hoped to kick off syncing through another non-.NET application and provide the UI there so the experience was a bit more seemless to the end user. If I can't do that, that's OK, but I'd really like to know if/how to make this work as a class library project instead.

    Read the article

  • What is GC holes?

    - by tianyi
    I wrote a long TCP connection socket server in C#. Spike in memory in my server happens. I used dotNet Memory Profiler(a tool) to detect where the memory leaks. Memory Profiler indicates the private heap is huge, and the memory is something like below(the number is not real,what I want to show is the GC0 and GC2's Holes are very very huge, the data size is normal): Managed heaps - 1,500,000KB Normal heap - 1400,000KB Generation #0 - 600,000KB Data - 100,000KB "Holes" - 500,000KB Generation #1 - xxKB Data - 0KB "Holes" - xKB Generation #2 - xxxxxxxxxxxxxKB Data - 100,000KB "Holes" - 700,000KB Large heap - 131072KB Large heap - 83KB Overhead/unused - 130989KB Overhead - 0KB Howerver, what is GC hole? I read an article about the hole: http://kaushalp.blogspot.com/2007/04/what-is-gc-hole-and-how-to-create-gc.html The author said : The code snippet below is the simplest way to introduce a GC hole into the system. //OBJECTREF is a typedef for Object*. { PointerTable *pTBL = o_pObjectClass->GetPointerTable(); OBJECTREF aObj = AllocateObjectMemory(pTBL); OBJECTREF bObj = AllocateObjectMemory(pTBL); //WRONG!!! “aObj” may point to garbage if the second //“AllocateObjectMemory” triggered a GC. DoSomething (aOb, bObj); } All it does is allocate two managed objects, and then does something with them both. This code compiles fine, and if you run simple pre-checkin tests, it will probably “work.” But this code will crash eventually. Why? If the second call to “AllocateObjectMemory” triggers a GC, that GC discards the object instance you just assigned to “aObj”. This code, like all C++ code inside the CLR, is compiled by a non-managed compiler and the GC cannot know that “aObj” holds a root reference to an object you want kept live. ======================================================================== I can't understand what he explained. Does the sample mean aObj becomes a wild pointer after GC? Is it mean { aObj = (*aObj)malloc(sizeof(object)); free(aObj); function(aObj);? } ? I hope somebody can explain it.

    Read the article

  • What can cause System.Move to occasionally give wrong results?

    - by Fredrik Loftheim
    The last few days we have had some strange problems with our database components developed by a third party. There has been no changes to these components for months. The code that HAS changed the last few days is our own code and we have also updated our gui-components developed by another third party. After debugging I have found that a call to System.Move in one of the database component procedures occationaly gives wrong results! Please take a look at the code below from the database components and read my comments. How can this inconsistent behaviour happen? Can anyone give me an idea of how to procede to find the cause of this inconsistent behaviour? NB! I dont think there is anything wrong with THIS code, it is only shown to explain the problem "symptoms". My guess is that there is some sort of memory corruption or something, caused by our code or the updated gui-component-code. Edit: Take a look at the blogpost linked below. It seems that it could be related to my problem. At least as I read it it confirms that System.Move can give wrong results: http://blog.excastle.com/2007/08/28/delphi-bug-of-the-day-fpu-stack-leak/ Procedure InternalDescribe; var cbufl: sb4; //sb4=LongInt cbuf: array[0..30] of char; cbufp: PChar; //.... begin //..Some code repeat //...Some code to initialize cbufp and cbufl //On the 15. iteration the values immediately Before Move are always these: //cbufp = 'STDPRODUCTSTOREDELEMENTSCOUNT' //cbuf = ('S', 'T', 'A', 'T', 'U', 'S', #0, 'E', 'V', 'A', 'R', 'R', 'E', 'C', 'I', 'D', #0, 'D', 'U', 'C', 'T', 'I', 'D', #0, #0, #0, #0, #0, #0, #0, #0) //cbufl = 29 Move(cbufp^, cbuf, cbufl); //Values immediately After Move should then be: //cbuf = ('S', 'T', 'D', 'P', 'R', 'O', 'D', 'U', 'C', 'T', 'S', 'T', 'O', 'R', 'E', 'D', 'E', 'L', 'E', 'M', 'E', 'N', 'T', 'S', 'C', 'O', 'U', 'N', 'T', #0, #0) //But sometimes this Move results in this value( 1 in 5..15 times): //cbuf = ('S', 'T', 'D', 'P', 'R', 'O', 'D', 'U', 'C', 'T', 'S', 'T', 'O', 'R', 'E', 'D', #0, #0, #0, #0, #0, 'N', 'T', 'S', 'C', 'O', 'U', 'N', 'T', #0, #0) } until SomeCondition; //...Some more code end;

    Read the article

  • Truth tables in code? How to structure state machine?

    - by HanClinto
    I have a (somewhat) large truth table / state machine that I need to implement in my code (embedded C). I anticipate the behavior specification of this state machine to change in the future, and so I'd like to keep this easily modifiable in the future. My truth table has 4 inputs and 4 outputs. I have it all in an Excel spreadsheet, and if I could just paste that into my code with a little formatting, that would be ideal. I was thinking I would like to access my truth table like so: u8 newState[] = decisionTable[input1][input2][input3][input4]; And then I could access the output values with: setOutputPin( LINE_0, newState[0] ); setOutputPin( LINE_1, newState[1] ); setOutputPin( LINE_2, newState[2] ); setOutputPin( LINE_3, newState[3] ); But in order to get that, it looks like I would have to do a fairly confusing table like so: static u8 decisionTable[][][][][] = {{{{ 0, 0, 0, 0 }, { 0, 0, 0, 0 }}, {{ 0, 0, 0, 0 }, { 0, 0, 0, 0 }}}, {{{ 0, 0, 1, 1 }, { 0, 1, 1, 1 }}, {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}}, {{{{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}, {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}, {{{ 0, 1, 1, 1 }, { 0, 1, 1, 1 }}, {{ 0, 1, 0, 1 }, { 1, 1, 1, 1 }}}}; Those nested brackets can be somewhat confusing -- does anyone have a better idea for how I can keep a pretty looking table in my code? Thanks! Edit based on HUAGHAGUAH's answer: Using an amalgamation of everyone's input (thanks -- I wish I could "accept" 3 or 4 of these answers), I think I'm going to try it as a two dimensional array. I'll index into my array using a small bit-shifting macro: #define SM_INPUTS( in0, in1, in2, in3 ) ((in0 << 0) | (in1 << 1) | (in2 << 2) | (in3 << 3)) And that will let my truth table array look like this: static u8 decisionTable[][] = { { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 0, 0, 0, 0 }, { 0, 0, 1, 1 }, { 0, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 }, { 0, 1, 1, 1 }, { 0, 1, 1, 1 }, { 0, 1, 0, 1 }, { 1, 1, 1, 1 }}; And I can then access my truth table like so: decisionTable[ SM_INPUTS( line1, line2, line3, line4 ) ] I'll give that a shot and see how it works out. I'll also be replacing the 0's and 1's with more helpful #defines that express what each state means, along with /**/ comments that explain the inputs for each line of outputs. Thanks for the help, everyone!

    Read the article

  • VBScript Issue Help Required.

    - by MalsiaPro
    I need a script that can run and pull information from any drive on a Windows operating system (Windows Server 2003), listing all files and folders which contain the following fields: The server is quite big and is within our domain. The required information is: Full file path (e.g. C:\Documents and Settings\user\My Documents\testPage.doc) File type (e.g. word document, spreadsheet, database etc) Size When Created When last modified When last accessed Also the script will need to convert that data to a CSV file, which later on I can modify and process in Excel. I can imagine that this data will be huge but I still need it. I am logged in as an administrator on the server and the script will need to also process protected files. As in previous posts I have read that the script will stop if such files are processed. I need to make sure that not a single file is skipped. Please note I have asked this question before but still have not got a working script. This is the script I got so far, file Test.vbs: Set objFS=CreateObject("Scripting.FileSystemObject") WScript.Echo Chr(34) & "Full Path" &_ Chr(34) & "," & Chr(34) & "File Size" &_ Chr(34) & "," & Chr(34) & "File Date modified" &_ Chr(34) & "," & Chr(34) & "File Date Created" &_ Chr(34) & "," & Chr(34) & "File Date Accessed" & Chr(34) Set objArgs = WScript.Arguments strFolder = objArgs(0) Set objFolder = objFS.GetFolder(strFolder) Go (objFolder) Sub Go(objDIR) If objDIR <> "\System Volume Information" Then For Each eFolder in objDIR.SubFolders Go eFolder Next End If For Each strFile In objDIR.Files WScript.Echo Chr(34) & strFile.Path & Chr(34) & "," &_ Chr(34) & strFile.Size & Chr(34) & "," &_ Chr(34) & strFile.DateLastModified & Chr(34) & "," &_ Chr(34) & strFile.DateCreated & Chr(34) & "," &_ Chr(34) & strFile.DateLastAccessed & Chr(34) Next End Sub I am currently using the command-line to run it: c:\test> cscript //nologo Test.vbs "c:\" > "C:\test\Output.csv" The script is not working. I don't know why.

    Read the article

  • Searching for duplicate records within a text file where the duplicate is determined by only two fields

    - by plg
    First, Python Newbie; be patient/kind. Next, once a month I receive a large text file (think 7 Million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine if the record is dupliacted, I compress all special characters from the part number (except . and #) and create a compressed part number. The test for duplicates becomes the supplier code and compressed part number combination. This part is fairly straight forward. Currently, I am just copying the original file with 2 new columns (compressed part and duplicate indicator). If the part is a duplicate, I put a "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, at the same time) to get the previous record where there was a supplier code/compressed part number match. So far, my code looks like this: Compress Full Part to a Compressed Part and Check for Duplicates on Supplier Code and Compressed Part combination import sys import re import time ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ start=time.time() try: file1 = open("C:\Accounting\May Accounting\May.txt", "r") except IOError: print sys.stderr, "Cannot Open Read File" sys.exit(1) try: file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a") except IOError: print sys.stderr, "Cannot Open Write File" sys.exit(1) hdrList="CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR" file2.write(hdrList+chr(10)) lines_seen=set() affirm="YES" records = file1.readlines() for record in records: fields = record.split(chr(124)) if fields[0]=="CIGSupplier": continue #If incoming file has a header line, skip it file2.write(fields[0]+"|"), #Supplier Code file2.write(fields[1]+"|"), #Full_Part file2.write(fields[2]+"|"), #Part Status file2.write(fields[3]+"|"), #Alias Flag file2.write(re.sub("[$\r\n]", "", fields[4])+"|"), #Acquisition Flag file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1])+"|"), #Compressed_Part dupechk=fields[0]+"|"+re.sub("[^0-9a-zA-Z.#]", "", fields[1]) if dupechk not in lines_seen: file2.write(chr(10)) lines_seen.add(dupechk) else: file2.write(affirm+chr(10)) print "it took", time.time() - start, "seconds." ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ file2.close() file1.close() It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import the results into Access and do a self join to locate the duplicates. Loading/querying/exporting results in Access a file this size takes around an hour, so I would like to be able to export the matched duplicates to another text file or an Excel file. Confusing enough? Thanks.
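    Since the goal is to pull up the earlier record once a duplicate shows up, the set can be swapped for a dict that remembers the first occurrence (either the whole line or just its line number). A minimal sketch of the idea, reusing the same dupechk key built in the script above:
      first_seen = {}          # dupechk key -> first record carrying that key
      duplicates = []          # (original record, duplicate record) pairs

      for record in records:
          fields = record.split('|')
          if fields[0] == "CIGSupplier":
              continue
          dupechk = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
          if dupechk not in first_seen:
              first_seen[dupechk] = record
          else:
              # Both the earlier record and the current one are now at hand.
              duplicates.append((first_seen[dupechk], record))

      # duplicates can then be written to a separate text/CSV file in the same pass,
      # skipping the Access self-join step entirely.
    Memory-wise this holds one line per unique key (a few hundred MB at worst for 7 million short records); storing only the line number instead of the whole record would cut that down further at the cost of a second pass.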

    Read the article

  • PHP resized image functions and S3 upload functions - but how to merge the two?

    - by chocolatecoco
    I am using S3 to store images and I am resizing and compressing images before it gets uploaded using PHP. I'm using this class for storing the images to an S3 bucket - http://undesigned.org.za/2007/10/22/amazon-s3-php-class This all works fine if I'm not doing any file processing before the file is uploaded because it reads the file upload from the $_FILES array. The problem is I am resizing and compressing the image before storing to the S3 bucket. So I'm no longer able to read from the $_FILES array. The functions for resizing: public function resizeImage($newWidth, $newHeight, $option="auto") { // *** Get optimal width and height - based on $option $optionArray = $this->getDimensions($newWidth, $newHeight, $option); $optimalWidth = $optionArray['optimalWidth']; $optimalHeight = $optionArray['optimalHeight']; // *** Resample - create image canvas of x, y size $this->imageResized = imagecreatetruecolor($optimalWidth, $optimalHeight); imagecopyresampled($this->imageResized, $this->image, 0, 0, 0, 0, $optimalWidth, $optimalHeight, $this->width, $this->height); // *** if option is 'crop', then crop too if ($option == 'crop') { $this->crop($optimalWidth, $optimalHeight, $newWidth, $newHeight); } } The script I am using to store the file after resizing and compressing to a local directory: public function saveImage($savePath, $imageQuality="100") { // *** Get extension $extension = strrchr($savePath, '.'); $extension = strtolower($extension); switch($extension) { case '.jpg': case '.jpeg': if (imagetypes() & IMG_JPG) { imagejpeg($this->imageResized, $savePath, $imageQuality); } break; case '.gif': if (imagetypes() & IMG_GIF) { imagegif($this->imageResized, $savePath); } break; case '.png': // *** Scale quality from 0-100 to 0-9 $scaleQuality = round(($imageQuality/100) * 9); // *** Invert quality setting as 0 is best, not 9 $invertScaleQuality = 9 - $scaleQuality; if (imagetypes() & IMG_PNG) { imagepng($this->imageResized, $savePath, $invertScaleQuality); } break; // ... etc default: // *** No extension - No save. break; } imagedestroy($this->imageResized); } with this PHP code to invoke it: $resizeObj = new resize("$images_dir/$filename"); $resizeObj -> resizeImage($thumbnail_width, $thumbnail_height, 'crop'); $resizeObj -> saveImage($images_dir."/tb_".$filename, 90); How do I modify the code above so I can pass it through this function: $s3->putObjectFile($thefile, "s3bucket", $s3directory, S3::ACL_PUBLIC_READ)
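    Since putObjectFile() wants a path, the least intrusive route is to let saveImage() write to a temporary file and hand that path to S3, deleting it afterwards. A hedged sketch, reusing the resize class and S3 class from above; the bucket name and object key are placeholders:
      <?php
      // Resize/compress to a temp file, push that file to S3, then clean up.
      $tmpBase = tempnam(sys_get_temp_dir(), 'img');
      $tmp = $tmpBase . '.jpg';            // saveImage() picks the output format from the extension

      $resizeObj = new resize("$images_dir/$filename");
      $resizeObj->resizeImage($thumbnail_width, $thumbnail_height, 'crop');
      $resizeObj->saveImage($tmp, 90);

      $ok = $s3->putObjectFile($tmp, 'my-bucket', "thumbs/tb_$filename", S3::ACL_PUBLIC_READ);

      unlink($tmp);
      unlink($tmpBase);                    // tempnam() also created this zero-byte file

      if (!$ok) {
          // handle the failed upload here (log, retry, fall back to local storage, etc.)
      }
    This keeps both classes untouched; the only change is where the resized image is written before the upload.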

    Read the article
