Search Results

Search found 9822 results on 393 pages for 'hottest articles'.


  • Convincing others why testing is good

    - by FireAphis
    Hello. In my team of real-time embedded C/C++ developers, most people have no culture of testing their code beyond casual manual sanity checks. I personally strongly believe in the advantages of autonomous, automatic tests, but whenever I try to convince anyone I hear the same recurring arguments:

    - We will spend more time writing the tests than writing the code.
    - It takes a lot of effort to maintain the tests.
    - Our code is spaghetti; there is no way we can unit-test it.
    - Our requirements are not sealed; we'll have to rewrite all the tests every time the requirements change.

    Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We at IBM/Microsoft/Google, surveying 3475 active projects, found that putting 50% more development time into testing cut the time spent fixing bugs by 75%", or "after half a year, writing code with tests took only marginally longer than writing it without tests used to". Any ideas? P.S.: I'm adding the C++ tag too, in case someone has specific experience convincing this, usually elitist, type of developers :-)
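
    For what it's worth, a minimal sketch of the kind of cheap, self-checking test that answers the "too much effort" argument; plain assert, no framework, and clamp_to_range is an invented stand-in for any small routine in the code base:

        #include <cassert>
        #include <cstdio>

        // Unit under test: a placeholder for some small embedded routine.
        static int clamp_to_range(int v, int lo, int hi) {
            if (v < lo) return lo;
            if (v > hi) return hi;
            return v;
        }

        int main() {
            // Three checks document the contract and run in milliseconds.
            assert(clamp_to_range(5, 0, 10) == 5);
            assert(clamp_to_range(-3, 0, 10) == 0);
            assert(clamp_to_range(42, 0, 10) == 10);
            std::puts("all tests passed");
            return 0;
        }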


  • Why does PEAR get installed to my user directory?

    - by webworm
    I am new to Linux and I am attempting to install the PHP PEAR library on a virtual server running Ubuntu. I am following a tutorial that covers installing PEAR, but I have run up against something confusing. When running the PEAR installation program I am prompted for the INSTALL_PREFIX. Evidently the INSTALL_PREFIX, among other things, determines where PEAR will be installed. The tutorial suggests the following path for INSTALL_PREFIX: "/home/MY_USER_NAME/pear", where MY_USER_NAME = my user account. Coming from the Windows world, I am used to applications being installed on the system where everyone can use them. If I install PEAR underneath my user directory, will other developers on the system be able to make use of PEAR in their PHP scripts? I want to make PEAR available to all users, not just myself. Could someone explain the difference between installing for all users and installing just for myself? Does the install location matter? Should I be installing PEAR in a different location? Thanks for any suggestions. P.S. The tutorial I am following is located at the following URL: http://articles.sitepoint.com/article/getting-started-with-pear/2
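
    A sketch of what the system-wide variant might look like; the /usr/local/pear prefix is an assumption (distributions differ), and the key point is that every user's PHP include_path must contain PEAR's php directory:

        # run (or re-point) the installer as root with a shared prefix;
        # pear config-set and the php_dir variable are real, the path is assumed
        sudo pear config-set php_dir /usr/local/pear/php

        ; php.ini - lets any user's script resolve require_once 'PEAR.php';
        include_path = ".:/usr/local/pear/php"

    A per-user prefix under /home mainly changes who can conveniently reach the files; other accounts can still use it if they add that directory to their own include_path, but a shared prefix is the conventional way to make PEAR available to everyone.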


  • How to use interfaces in exception handling

    - by vikp
    Hi, I'm working on the exception-handling layer for my application. I have read a few articles on interfaces and generics. I have used inheritance quite a lot before and I'm comfortable in that area. I have a very brief design that I'm going to implement:

        public interface IMyExceptionLogger
        {
            void LogException();
            // Helper methods for writing into files, db, xml
        }

    I'm slightly confused about what I should be doing next.

        public class FooClass : IMyExceptionLogger
        {
            // Fields
            // Constructors
        }

    Should I implement the LogException() method within FooClass? If yes, then I'm struggling to see how I'm better off using an interface instead of the concrete class... I have a variety of classes that will make use of that interface, but I don't want to write an implementation of the interface within each class. At the same time, if I implement the interface in one class and then use that class in different layers of the application, I will still be using concrete classes instead of interfaces, which is bad OO design... I hope this makes sense. Any feedback and suggestions are welcome. Please note that I'm not interested in using log4net or its competitors, because I'm doing this to learn. Thank you. Edit: Wrote some more code. So I will implement a variety of loggers with this interface, i.e. DBExceptionLogger, CSVExceptionLogger, XMLExceptionLogger etc. Then I will still end up with concrete classes that I will have to use in different layers of my application.
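
    A minimal sketch of how the interface pays off in the other layers: they depend only on IMyExceptionLogger, and the concrete logger is named exactly once, at the composition root. OrderService and the logger bodies are invented for illustration:

        using System;

        public interface IMyExceptionLogger
        {
            void LogException(Exception ex);
        }

        public class DBExceptionLogger : IMyExceptionLogger
        {
            public void LogException(Exception ex) { /* write to the database */ }
        }

        public class CSVExceptionLogger : IMyExceptionLogger
        {
            public void LogException(Exception ex) { /* append to a .csv file */ }
        }

        public class OrderService
        {
            private readonly IMyExceptionLogger _logger;

            // The layer never names a concrete logger; any implementation fits.
            public OrderService(IMyExceptionLogger logger) { _logger = logger; }

            public void PlaceOrder()
            {
                try { /* work */ }
                catch (Exception ex) { _logger.LogException(ex); throw; }
            }
        }

        class Program
        {
            static void Main()
            {
                // Composition root: the only line that picks a concrete type.
                var service = new OrderService(new CSVExceptionLogger());
                service.PlaceOrder();
            }
        }

    Swapping DBExceptionLogger for CSVExceptionLogger is then a one-line change here, and the rest of the application never notices.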


  • Print-friendly version of a floated list

    - by Brad
    I have a list of phone extensions that I want to print a friendly version of. I have a print CSS file so it prints appropriately on paper; the extensions live in an unordered list whose items are floated to the left:

        <ul>
          <li>Larry Hughes <span class="ext">8291</span></li>
          <li>Chuck Davis <span class="ext">3141</span></li>
          <li>Kevin Skillis <span class="ext">5115</span></li>
        </ul>

    When the second page prints, the name part of the list is left off (in Firefox; it works fine in Google Chrome and IE), see here: http://cl.ly/de965aea63f66c13ba32 I am referring to this: http://www.alistapart.com/articles/goingtoprint/ - they mention applying float: none; to the content part of the page. If I do that, how should I go about making the list show up in 4 columns? It is a dynamic list, pulled from a database. Any help is appreciated.
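
    A sketch of one way to keep four columns in print without floats, assuming the list can be given a class such as "extensions"; the CSS3 multi-column properties are real, though browsers of that era wanted the vendor prefixes:

        @media print {
          ul.extensions li { float: none; }   /* sidestep the Firefox page-break bug */
          ul.extensions {
            -moz-column-count: 4;             /* Firefox */
            -webkit-column-count: 4;          /* Chrome, Safari */
            column-count: 4;
          }
        }

    Because the browser computes the columns, the markup generated from the database stays exactly as it is.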


  • CakePHP: route old Google search results to the new home page

    - by ion
    Hi there, I have created a new website for a company and I would like all the previous search-engine results to be redirected. Since there were quite a few pages, and most of them used an id, I would like to use something generic instead of re-routing all the old pages one by one. My first thought was to add this:

        Router::connect('/*', array('controller' => 'pages', 'action' => 'display', 'home'));

    and put it at the very end of the routes.php file [since routes are matched in order], so that any request not matched by an earlier route would fall through to this one and land on the home page. However, this does not work. I'm pasting my routes.php file [since it is small], hoping that someone can give me a hint:

        Router::connect('/', array('controller' => 'pages', 'action' => 'display', 'home'));
        Router::connect('/company/*', array('controller' => 'articles', 'action' => 'view'));
        Router::connect('/contact/*', array('controller' => 'contacts', 'action' => 'view'));
        Router::connect('/lang/*', array('controller' => 'p28n', 'action' => 'change'));
        Router::connect('/eng/*', array('controller' => 'p28n', 'action' => 'shuntRequest', 'lang' => 'eng'));
        Router::connect('/gre/*', array('controller' => 'p28n', 'action' => 'shuntRequest', 'lang' => 'gre'));
        Router::parseExtensions('xml');
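
    One hedged alternative, since old search results really want a 301 rather than the home page served at a stale URL: send unmatched requests to a tiny catch-all action that redirects. The controller below is invented; Controller::redirect($url, $status, $exit) is the real CakePHP 1.x signature:

        // app/controllers/redirects_controller.php
        class RedirectsController extends AppController {
            var $name = 'Redirects';
            var $uses = array();   // no model needed

            function catch_all() {
                // 301 tells search engines the old URL is permanently gone
                $this->redirect('/', 301, true);
            }
        }

        // at the very end of app/config/routes.php, after every legitimate route
        Router::connect('/*', array('controller' => 'redirects', 'action' => 'catch_all'));

    If the bare '/*' pattern still refuses to match in your CakePHP version, the same action also works as the target of a custom 404 handler (app_error), which catches everything no route claimed.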


  • C++ struct, public data members and inheritance

    - by Marius
    Is it OK to have public data members in a C++ class/struct in certain particular situations? How does that go along with inheritance? I've read opinions on the matter, some stated already here http://stackoverflow.com/questions/952907/practices-on-when-to-implement-accessors-on-private-member-variables-rather-than http://stackoverflow.com/questions/670958/accessors-vs-public-members and in books/articles (Stroustrup, Meyers), but I'm still a little bit in the dark. I have some configuration blocks that I read from a file (integers, bools, floats) and I need to place them into a structure for later use. I don't want to expose these externally, just use them inside another class (I actually do want to pass these config parameters to another class, but I don't want to expose them through a public API). The thing is, I have many such config parameters (15 or so), and writing getters and setters for all of them seems like unnecessary overhead. Also, I have more than one configuration block, and the blocks share some of the parameters. Making a struct with all the data members public and then subclassing does not feel right. What's the best way to tackle this situation? Does making one big struct covering all the parameters provide an acceptable compromise (I would have to leave some of them at their default values for blocks that do not use them)?
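
    A sketch of the compromise I would reach for here: plain aggregates for the passive data, the shared parameters factored out by composition rather than subclassing, and the struct kept private to the class that consumes it (all names invented for illustration):

        // Parameters shared by several blocks: composition, not a base class.
        struct CommonConfig {
            int  sample_rate_hz;
            bool verbose;
        };

        // One block's parameters: plain public data, no getters or setters.
        struct FilterConfig {
            CommonConfig common;
            float cutoff_hz;
            int   tap_count;
        };

        class FilterStage {
        public:
            explicit FilterStage(const FilterConfig& cfg) : cfg_(cfg) {}
            // Expose behaviour, not the raw parameters.
            float cutoff() const { return cfg_.cutoff_hz; }
        private:
            FilterConfig cfg_;   // the struct never escapes this class's public API
        };

    Since the structs never cross a public API boundary, the usual argument for accessors (freedom to change the representation later) barely applies, and one small struct per block avoids the "one big struct full of unused defaults" smell.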


  • The "correct" way of using multilingual support

    - by Felipe Athayde
    I just began working with ASP.NET and I'm trying to bring with me some coding standards I find healthy, among them multilingual support and the use of resources for easily handling future changes. Back when I coded desktop applications, every text had to be translated, so it was common practice to have language files for every language I wanted to offer customers. In those files I would map every single text, from button labels to error messages. In ASP.NET, with the help of Visual Studio, I can have the IDE generate such resource files (Tools - Generate Local Resource), but then I would have to fill my web pages with labels - at least that is what I've learned from articles and tutorials. However, that approach looks a bit odd, and I'm tempted to guess it doesn't smell that good either. Now to the questions: 1) Should I keep every single text on my website in labels and manage the content in the resource files? It looks/feels odd, especially for a text with several paragraphs. 2) Whenever I add/remove something, e.g. a button, in an .aspx file, I would have to add it to the resource file as well, because generating the resource file again would simply override all my previous changes. That doesn't feel like reusable code at all. Any comments or suggestions on this one? Perhaps I got it all wrong from the tutorials, as it doesn't seem like a standardized matter - especially if it requires recompiling the entire application whenever some change has to be made.
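
    For long passages, a Label per paragraph isn't required. A sketch, assuming a global resource file App_GlobalResources/Texts.resx with a WelcomeBody entry; meta:resourcekey and the <%$ Resources: %> expression are both standard ASP.NET 2.0 localization syntax:

        <%-- short strings: implicit local resource, filled in by Generate Local Resource --%>
        <asp:Button ID="SaveButton" runat="server" meta:resourcekey="SaveButton" />

        <%-- long, multi-paragraph text: one explicit global resource entry --%>
        <asp:Localize ID="Welcome" runat="server"
                      Text="<%$ Resources:Texts, WelcomeBody %>" />

    Global .resx files under App_GlobalResources are not touched by Tools - Generate Local Resource, so regenerating local resources for a changed page does not overwrite them, which addresses point 2 for the long texts.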


  • iFrame in a Content Editor Web Part

    - by Music Magi
    Hi - I'm creating a page with two web parts: one with a search UI and another that displays the results. I'm using Content Editor Web Parts for both, but no matter how hard I try, I can't display more than a fraction of the results page inside the web part that holds the iframe. The width sets fine, but no matter how I set the height (I've tried inline, with CSS, and with JavaScript/jQuery), I cannot change the height of what is displayed. If I make the height something ridiculous like 3000px, I can see the empty space below, but it still shows only a small amount of the target page at the top of the iframe. I've inspected the HTML, and the iframe takes up all the space it's supposed to, yet shows only that small sliver of the page it is displaying. I've tried numerous approaches from numerous articles but have come up short (pun intended). Does anyone have experience with this, or any indication of why it displays this way, or what I have to do to make the iframe fill the entire web part? Thanks in advance.
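
    One hedged thing to try, in case the height that needs setting is the inner document's rather than the frame's: size the iframe from its own content once it loads. This only works when the results page is served from the same origin as the SharePoint page; the URL and id are placeholders:

        <iframe id="resultsFrame" src="/search/results.aspx"
                style="width:100%; border:0;" scrolling="no"
                onload="this.style.height =
                        this.contentWindow.document.body.scrollHeight + 'px';">
        </iframe>

    If the sliver persists even at height 3000px, it is also worth checking whether a parent element injected by the web part zone carries its own fixed height or overflow:hidden, which would clip the frame from outside.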


  • Chicken-and-egg problem (restoring the database) when trying to write unit tests against SQL Server 2008

    - by Hamish Grubijan
    OK, they are not unit tests but end-to-end tests; the setup is somewhat involved. The tests will use C# over an ODBC connection. Every test will try to clean up after itself, but every 20 tests or so (once per C# class) we need to do a full database restore. I do not think I can do that over an ODBC connection; according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx I would hit:

        Msg 6104, Level 16, State 1, Line 1
        Cannot use KILL to kill your own process.

    However, I would like to do it, so that 199 tests do not run amok because of one bad clean-up. Is there another way? Perhaps I can open a different "connection", such as COM automation or something of that sort, and kill all database connections from there? If so, how can I do that? Also, will the clients be able to reconnect automatically after a restore, or do I have to tear everything down once every 20 tests or so? If you find this question confusing, please let me know what your questions are. Thanks!
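
    A sketch of the usual approach from a separate maintenance connection (run against master, not the database being restored; MyTestDb and the backup path are placeholders):

        -- Boot everyone else off and roll back their open transactions.
        ALTER DATABASE MyTestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

        RESTORE DATABASE MyTestDb
            FROM DISK = N'C:\backups\MyTestDb.bak'
            WITH REPLACE;

        ALTER DATABASE MyTestDb SET MULTI_USER;

    Because SINGLE_USER ... ROLLBACK IMMEDIATE evicts the other sessions for you, no KILL is needed. The test clients do have to reconnect afterwards; ODBC connections do not survive a restore.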


  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a third-party client executable. The Qt-based server uses the QTcpServer and QTcpSocket facilities (non-blocking). Going through articles on TCP, I understand the following: the original TCP specification made the negotiable window size a 16-bit value, for a maximum of 65535 bytes, but implementations often use the RFC window-scale extension, which allows the sliding window to be scaled by bit-shifting up to a maximum of 1 gigabyte. This is implementation-defined, so the receiver and the sender can end up with very different window sizes; the server uses Qt's facilities without hard-coding any window-size limit. The client first asks for all the information it can, based on previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a burst of messages, each asking for several MB of data. The server processes these and puts the results into its send buffer. The client, however, cannot handle the messages at the same pace, and it seems the client's receive buffer is far smaller (65535 bytes, maybe) than the sender's transmit window. The messages therefore accumulate at the sender's end; TCP writes on the sender would only block once the sender's buffer was full too, which never happens because that buffer is much larger. This manifests as growing memory consumption on the server. To prevent it, I used the socket's waitForBytesWritten() with the timeout set to -1 for an infinite wait. From the behaviour I observe, this blocks the writing thread until the data has actually been accepted by the receiver's window (which happens once earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible. Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also, please point out if any of my understandings are incorrect.
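
    A sketch of capping the unsent backlog without waiting forever, using real QTcpSocket calls (bytesToWrite() and waitForBytesWritten() both exist); the 1 MB cap and 30 s timeout are arbitrary choices, and this belongs in the worker thread doing the sending:

        // Assumes 'socket' is a connected QTcpSocket* and 'payload' a QByteArray.
        static const qint64 kMaxUnsent = 1 * 1024 * 1024;  // x MB cap, tune as needed

        socket->write(payload);
        // Block only while the not-yet-transmitted backlog exceeds the cap.
        while (socket->bytesToWrite() > kMaxUnsent) {
            if (!socket->waitForBytesWritten(30000)) {
                // Timed out or socket error: abort/report instead of hanging forever.
                break;
            }
        }

    Compared with a bare waitForBytesWritten(-1) after every write, this lets the server stay x MB ahead of a slow client and still puts a hard bound on memory.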


  • Minimizing server load in case of many XMLHttpRequest calls

    - by user1888975
    I am making a website where users can "like" articles. Whenever the like button is clicked, I send an XMLHttpRequest to the server to run a file called like_clicked.php, with GET data carrying the article id and user id. This file updates the SQL database and also adds a node to an XML file related to the user. This is the first time I am building something for mass usage, and I am worried about the server load when many users call like_clicked.php at once. Please tell me whether this method is OK. I am also thinking of an alternative in case it fails: making many like_clicked files (namely like_clicked1.php, like_clicked2.php, ...) to minimize the load on a single i-node. Is there a method to detect that it is better to call the next like_clicked file? We would need to know how many calls per unit time are arriving for a particular file. How do we handle this? Thanks in advance.
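
    For what it's worth, PHP does not serialize requests per script file, so multiple copies of like_clicked.php would not spread any load; the contention to watch is the database and the shared per-user XML file. A sketch of the single endpoint with a prepared statement (table, column and credential names are assumptions):

        <?php
        // like_clicked.php - one endpoint is fine; scale the storage, not the filename.
        $pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'pass');
        $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $stmt = $pdo->prepare(
            'INSERT INTO article_likes (article_id, user_id) VALUES (?, ?)');
        $stmt->execute(array((int)$_GET['article_id'], (int)$_GET['user_id']));

    Keeping the per-user "likes" in the database instead of an XML file also removes the file-locking bottleneck that concurrent writes to one file would create.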


  • Domain Authentication from .NET Client over VPN

    - by Holy Christ
    I am writing a ClickOnce WPF app that will sometimes be used over VPN. The app uses resources available only to domain-authenticated users: SSRS reports, LDAP lookups of user information, web services, etc. When a user logs in from a machine that is not authenticated on the domain, I need to somehow get his credentials, authenticate him on the domain, and store the credentials. What is the recommended approach for authenticating domain users over VPN, and how can I store the credentials securely? I've found several articles, but not much posted recently, and a lot of the solutions seem kind of hacky or aren't very secure (i.e., storing strings in clear text in memory). It would be nice if I could use the ActiveDirectoryMembershipProvider, but that seems to be geared towards web apps. EDIT: The above is something of a workaround. The user must enter their domain credentials to authenticate on the VPN, so it would be ideal to reuse the credentials the user already entered for the VPN login, instead of WindowsIdentity.GetCurrent() (which returns the user logged into the computer). Any ideas on how that could work? We use Juniper Networks to connect to the VPN. Thanks!
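
    A sketch of validating the entered credentials against the domain and keeping the password out of plain strings afterwards; PrincipalContext.ValidateCredentials is a real .NET 3.5 API (System.DirectoryServices.AccountManagement), and the domain name is a placeholder:

        using System.DirectoryServices.AccountManagement;
        using System.Security;

        bool valid;
        using (var ctx = new PrincipalContext(ContextType.Domain, "corp.example.com"))
        {
            valid = ctx.ValidateCredentials(userName, password);
        }

        // Hold the secret in a SecureString rather than an ordinary String.
        var secure = new SecureString();
        foreach (char c in password) secure.AppendChar(c);
        secure.MakeReadOnly();

    This does not solve reusing the VPN client's own credentials (that depends on what, if anything, Juniper exposes), but it avoids both hand-rolled LDAP binds and clear-text strings living longer than the validation call.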


  • I have data about deadlocks, but I can't understand why they occur

    - by Alex
    I am receiving a lot of deadlocks in my big web application. http://stackoverflow.com/questions/2941233/how-to-automatically-re-run-deadlocked-transaction-asp-net-mvc-sql-server There I wanted to re-run deadlocked transactions, but I was told to get rid of the deadlocks instead - much better than trying to catch them. So I spent the whole day with SQL Profiler, setting the tracing keys etc., and this is what I got. There's a Users table. I have a very heavily used page with the following query (it's not the only query, but it's the one that causes trouble):

        UPDATE Users
        SET views = views + 1
        WHERE ID IN (SELECT AuthorID FROM Articles WHERE ArticleID = @ArticleID)

    And then there's the following query on ALL pages:

        User = DB.Users.SingleOrDefault(u => u.Password == password && u.Name == username);

    That's where I get the User from cookies. Very often a deadlock occurs and this second LINQ to SQL query is chosen as the victim, so it isn't run, and users of my site see an error screen. I have read a lot about deadlocks... and I don't understand why this causes one. Both queries obviously run very often, at least once a second, maybe more (300-400 users online), so they can easily run at the same time - but why does that deadlock? Please help. Thank you.
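
    Two standard mitigations worth trying, sketched below; both are real SQL Server features, but whether they resolve this particular deadlock depends on the actual plans (check the deadlock graph for the indexes involved). The database and index names are assumed:

        -- 1) Let readers see row versions instead of waiting on the writer's locks.
        --    Needs a moment of exclusive access to the database to switch on.
        ALTER DATABASE MyApp SET READ_COMMITTED_SNAPSHOT ON;

        -- 2) Give the login lookup a covering index so it never scans into
        --    the rows the UPDATE is touching.
        CREATE INDEX IX_Users_Name_Password
            ON Users (Name, Password) INCLUDE (ID);

    The classic shape of this deadlock is the reader scanning Users while the UPDATE holds row locks and waits to escalate; either row versioning or a covering index removes one side of that cycle.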


  • Symfony 2 - Updating a table based on newly inserted record in another table

    - by W00d5t0ck
    I'm trying to create a small forum application using Symfony 2 and Doctrine 2. My ForumTopic entity has a last_post field (oneToOne mapping). Now when I persist my new post with $em->persist($post); I want to update my ForumTopic entity so its last_post field references this new post. I have just realised that it cannot be done with a Doctrine postPersist listener, so I decided to use a small hack and tried:

        $em->persist($post);
        $em->flush();

        $topic->setLastPost($post);
        $em->persist($post);
        $em->flush();

    but it doesn't seem to update my topics table. I also took a look at http://docs.doctrine-project.org/projects/doctrine-orm/en/2.1/reference/working-with-associations.html#transitive-persistence-cascade-operations hoping it would solve the problem by adding cascade: [ persist ] to my Topic.orm.yml file, but it didn't help either. Could anyone point me to a solution or an example class? My ForumTopic mapping is:

        FrontBundle\Entity\ForumTopic:
            type: entity
            table: forum_topics
            id:
                id:
                    type: integer
                    generator: { strategy: AUTO }
            fields:
                title:         { type: string(100), nullable: false }
                slug:          { type: string(100), nullable: false }
                created_at:    { type: datetime, nullable: false }
                updated_at:    { type: datetime, nullable: true }
                update_reason: { type: text, nullable: true }
            oneToMany:
                posts:
                    targetEntity: ForumPost
                    mappedBy: topic
            manyToOne:
                created_by:
                    targetEntity: User
                    inversedBy: articles
                    nullable: false
                updated_by:
                    targetEntity: User
                    nullable: true
                    default: null
                topic_group:
                    targetEntity: ForumTopicGroup
                    inversedBy: topics
                    nullable: false
            oneToOne:
                last_post:
                    targetEntity: ForumPost
                    nullable: true
                    default: null
                    cascade: [ persist ]
            uniqueConstraint:
                uniqueSlugByGroup:
                    columns: [ topic_group, slug ]

    And my ForumPost mapping is:

        FrontBundle\Entity\ForumPost:
            type: entity
            table: forum_posts
            id:
                id:
                    type: integer
                    generator: { strategy: AUTO }
            fields:
                created_at:    { type: datetime, nullable: false }
                updated_at:    { type: datetime, nullable: true }
                update_reason: { type: string, nullable: true }
                text:          { type: text, nullable: false }
            manyToOne:
                created_by:
                    targetEntity: User
                    inversedBy: forum_posts
                    nullable: false
                updated_by:
                    targetEntity: User
                    nullable: true
                    default: null
                topic:
                    targetEntity: ForumTopic
                    inversedBy: posts
                    nullable: false
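
    A hedged sketch of the usual fix: make all the changes first and flush once; Doctrine's unit of work orders the SQL itself, so the post's INSERT and the topic's last_post UPDATE go out together. setTopic/setLastPost are assumed to be the usual generated setters:

        $post->setTopic($topic);
        $topic->setLastPost($post);

        $em->persist($post);   // $topic, if fetched via the EntityManager, is already managed
        $em->flush();          // one flush writes both the new row and the FK update

    In the snippet from the question, the second persist() call targets $post (already managed, so it is a no-op) rather than $topic; if $topic is not a managed entity at that point, nothing ever schedules the UPDATE, which would explain the untouched topics table.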


  • Is it better to query the database or grab from a file? PHP & MySQL

    - by pfunc
    I am keeping a large number of words in a database that I want to match articles against. I was thinking it would be better to keep these words in an array and grab that array whenever needed, instead of querying the database every time (since the words won't change much). Is there much of a performance difference? And if I do this, how do I write a script that writes the array to a new PHP file? I tried writing the array like so:

        while ($row = mysql_fetch_assoc($query)) {
            $newArray[] = $row;
        }
        $fp = fopen('noWordsArr.php', 'w');
        fwrite($fp, $newArray);
        fclose($fp);

    But all I get in the other file is "Array". So I figured I could write this and then have a cron job hit the file every few days in case things have changed. But I guess if there is no performance advantage, that probably won't be necessary, and I can just query the database every time I need the words.
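
    The "Array" output happens because fwrite() casts its argument to a string, and a PHP array stringifies to the literal word Array. To persist the real structure, emit PHP source with var_export() (or serialize()). A sketch, keeping the file name from the question:

        <?php
        // Emit a loadable PHP file: noWordsArr.php will contain 'return array(...);'
        $code = "<?php\nreturn " . var_export($newArray, true) . ";\n";
        file_put_contents('noWordsArr.php', $code);

        // Later, from any script: include returns the file's return value.
        $words = include 'noWordsArr.php';

    An included .php file also gets cached by any opcode cache you may be running, which is where the speed advantage over a per-request query would mostly come from.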


  • Why do I have to specify pure virtual functions in the declaration of a derived class in Visual C++?

    - by neuviemeporte
    Given the base class A and the derived class B:

        class A {
        public:
            virtual void f() = 0;
        };

        class B : public A {
        public:
            void g();
        };

        void B::g() { cout << "Yay!"; }
        void B::f() { cout << "Argh!"; }

    I get errors saying that f() is not declared in B when trying to define void B::f(). Do I have to declare f() explicitly in B? I think that if the interface changes, I shouldn't have to correct the declarations in every single class deriving from it. Is there no way for B to get all the virtual function declarations from A automatically? EDIT: I found an article saying that the inheritance of pure virtual functions is compiler-dependent: http://www.objectmentor.com/resources/articles/abcpvf.pdf I'm using VC++ 2008 and wonder if there's an option for this.
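
    For what it's worth, this is not a VC++ quirk: standard C++ requires an out-of-class member definition to match a declaration inside that class, so every override must be declared in the derived class; if B declares no f(), B simply stays abstract. A corrected sketch:

        #include <iostream>
        using std::cout;

        class A {
        public:
            virtual void f() = 0;
            virtual ~A() {}      // virtual destructor for a polymorphic base
        };

        class B : public A {
        public:
            void f();            // the override must be declared here
            void g();
        };

        void B::f() { cout << "Argh!"; }
        void B::g() { cout << "Yay!"; }

        int main() {
            B b;
            b.f();  // prints "Argh!"
            b.g();  // prints "Yay!"
            return 0;
        }

    There is no compiler switch that imports the declarations for you; the one-line declaration per override is the price of the language's "a class's members are exactly what it declares" rule.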


  • How do I put my return data from an asmx into JSON?

    - by jphenow
    I want to return an array of JavaScript objects from my ASP.NET .asmx file, i.e.:

        variable = [
            { value1: 'value1', value2: 'value2', ... },
            ...
        ];

    I've been having trouble getting there. I'd put my attempt into code, but I've been hacking away at it so much that it would probably do more harm than good. Basically, I am using a web service to find names as people type. I'd use a regular text file or something, but it's a huge database that's always changing (and don't worry, I've indexed the names so searching is a little snappier), so I would really prefer to stick with this method and just figure out how to get usable JSON back to the JavaScript. I've seen a few articles that sort of attempt to describe how to approach this, but I honestly find Microsoft's articles nearly unreadable. Thanks in advance for any assistance. EDIT: I'm using the $.ajax() function from jQuery. I had it working, but it felt like bad practice: instead of returning and using actual JSON, I would take a string back and insert it into the HTML to use the variable it set - very roundabout.
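
    A sketch of the standard ASP.NET AJAX route: mark the service with [ScriptService], return a typed array, and let the framework serialize it to JSON. The class, method and field names are invented; note that ASP.NET wraps the JSON result in a "d" property:

        using System.Web.Services;
        using System.Web.Script.Services;

        public class NameHit
        {
            public string First { get; set; }
            public string Last  { get; set; }
        }

        [WebService(Namespace = "http://example.com/")]
        [ScriptService]
        public class NameSearch : WebService
        {
            [WebMethod]
            public NameHit[] Find(string prefix)
            {
                // ... query the indexed names table here ...
                return new[] { new NameHit { First = "Ann", Last = "Archer" } };
            }
        }

    On the client, requesting JSON explicitly is what triggers the JSON response; the array comes back as data.d:

        $.ajax({
            type: "POST",
            url: "NameSearch.asmx/Find",
            data: JSON.stringify({ prefix: "An" }),
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (data) { /* data.d is the array of objects */ }
        });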


  • jQuery toggle div and text

    - by Jack
    I would like some help with this code; I have read similar questions and answers here on Stack Overflow, but I still can't get it right. I have a link whose text is set with .html("") to read "more..."; when clicked, it opens the hidden div, and at the end .html("") sets the text to "less...". When I click to close, the div slides shut, but the text still reads "less...". I have read many articles on how to do this but am confused and can't get past this last hurdle; any help would be much appreciated.

        // this is the intro div
        Ligula suspendisse nulla pretium, rhoncus tempor placerat fermentum, enim
        integer ad vestibulum volutpat. Nisl rhoncus turpis est, vel elit, congue
        wisi enim nunc ultricies sit, magna tincidunt. Maecenas aliquam maecenas
        ligula nostra, accumsan taciti.

        // this next div is hidden
        Ligula suspendisse nulla pretium, rhoncus tempor placerat fermentum, enim
        integer ad vestibulum volutpat. Nisl rhoncus turpis est, vel elit, congue
        wisi enim nunc ultricies sit, magna tincidunt. Maecenas aliquam maecenas
        ligula nostra, accumsan taciti.

        $(document).ready(function() {
            $(".overview-continued").hide();
            $(".show-overview").html("more...");
            $(".show-overview").click(function() {
                $(".overview-continued").slideToggle("1000");
                $(".show-overview").html("less...");
                return false;
            });
        });
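
    A sketch of the usual fix: decide the label from the element's state after the animation, in slideToggle's completion callback (class names as in the question; inside the callback, this is the animated element):

        $(document).ready(function () {
            $(".overview-continued").hide();
            $(".show-overview").html("more...");

            $(".show-overview").click(function () {
                $(".overview-continued").slideToggle(1000, function () {
                    // Runs after the animation, so the label reflects the real state.
                    $(".show-overview").html($(this).is(":visible") ? "less..." : "more...");
                });
                return false;
            });
        });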


  • Session timeout prompt in ASP.NET

    - by renathy
    The application I am working on implements a session-timeout prompt using jQuery. A timer counts, and if there is no user activity for a predefined X minutes, it shows the user a prompt ("Your session will end soon... Continue or Logout"). It uses the approach found here: http://www.codeproject.com/Articles/227382/Alert-Session-Time-out-in-ASP-Net. However, this breaks down if the user opens a new tab:

    1) The user logs in; the timer starts counting the user's inactivity.
    2) The user clicks a link that opens in a new window (in our case, a long-running report).
    3) The second browser tab is active, and its callbacks/postbacks keep the session alive.
    4) The first browser tab, however, is inactive, and its counter concludes the session should be closed; it displays the prompt and then logs the user out. That is not what we want.

    So the given approach fixes the session-timeout prompt, but if the user is active in another tab, the application logs him out anyway, which is not the desired behaviour. We have a report page that opens reports in a new tab/window, and these can run for quite a long time. The report section performs callbacks of its own, so the session won't end in that tab, but the first tab would end it.
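
    A sketch of one way to make the countdown tab-aware, assuming browsers with localStorage (IE8 and later): every tab stamps a shared "last activity" key, and the prompt fires only when all tabs have been idle. showTimeoutPrompt() is a placeholder for the existing warning dialog, and the numbers are illustrative:

        // In any tab: record activity in storage shared across this site's tabs.
        $(document).bind("click keypress", function () {
            localStorage.setItem("lastActivity", String(new Date().getTime()));
        });

        // In every tab: check the shared timestamp instead of a per-tab timer.
        var WARN_AFTER_MS = 19 * 60 * 1000;  // warn 1 min before a 20-min timeout
        setInterval(function () {
            var last = parseInt(localStorage.getItem("lastActivity") || "0", 10);
            if (new Date().getTime() - last > WARN_AFTER_MS) {
                showTimeoutPrompt();
            }
        }, 30 * 1000);

    The server-side session timeout still rules, so the client-side threshold has to stay comfortably below it; the localStorage key is what lets activity in the report tab silence the warning in the first tab.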


  • How can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions? The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database, which all (sub-)processes access at various times through an application-specific module that encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that only one or a few fixed database sessions handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the change easy, because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way to connect a large herd of requestors to one or a few database connectors such that bidirectional communication is possible (for collecting query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request "calls back" to submit the result, but my mind fails to form an image clear enough to paint into code. Threading instead of forking might have given me an easier start, but it would now require massive changes to the code base that I'm not prepared to make on a live system. The more I think about it, the more the basic idea looks to me like a pre-forked web server, except that it serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?
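
    For inspiration only, a toy sketch of the "pre-forked database server" idea in Perl: one broker process owns the DBI handle, and each worker sends one JSON-encoded request per line over a UNIX socket and reads one line back. Serial, with no auth, timeouts or error handling; the module names are real (IO::Socket::UNIX, JSON::PP or JSON::XS, DBI), everything else is assumed:

        use strict;
        use warnings;
        use IO::Socket::UNIX;
        use Socket qw(SOCK_STREAM);
        use JSON::PP qw(encode_json decode_json);
        use DBI;

        my $path = '/tmp/db_broker.sock';
        unlink $path;
        my $server = IO::Socket::UNIX->new(
            Type => SOCK_STREAM, Local => $path, Listen => 64,
        ) or die "listen: $!";

        # The single permanent session all workers share.
        my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'pass',
                               { RaiseError => 1, AutoCommit => 1 });

        while (my $client = $server->accept) {
            my $line = <$client>;                      # one request per connection
            my $req  = decode_json($line);             # { sql => ..., bind => [...] }
            my $rows = $dbh->selectall_arrayref($req->{sql}, undef, @{ $req->{bind} });
            print {$client} encode_json($rows), "\n";  # one reply line back
            close $client;
        }

    The worker side of the abstraction module would then connect, send, read and disconnect where it currently calls DBI directly; requests are serialised through the broker, which is the whole point, but also the bottleneck to watch.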


  • How to emulate thread-local storage in user space in C++?

    - by vprajan
    I am working on a mobile platform on top of the Nucleus RTOS. It uses Nucleus' threading system, but it has no support for explicit thread-local storage, i.e. the TlsAlloc, TlsSetValue, TlsGetValue, TlsFree APIs, and the platform has no user-space pthreads either. I found that the __thread storage modifier is present in most C++ compilers, but I don't know how to make it work for my kind of usage. How does the __thread keyword map onto explicit thread-local storage? I have read many articles, but none make the following basics clear to me: Is a __thread variable different for each thread? How do I write to it and read from it? Does each thread have exactly one copy of the variable? The following is the pthread-based implementation:

        pthread_key_t m_key;

        struct Data : Noncopyable {
            Data(T* value, void* owner) : value(value), owner(owner) {}
            int* value;
        };

        inline ThreadSpecific()
        {
            int error = pthread_key_create(&m_key, destroy);
            if (error)
                CRASH();
        }

        inline ~ThreadSpecific()
        {
            pthread_key_delete(m_key); // Does not invoke destructor functions.
        }

        inline T* get()
        {
            Data* data = static_cast<Data*>(pthread_getspecific(m_key));
            return data ? data->value : 0;
        }

        inline void set(T* ptr)
        {
            ASSERT(!get());
            pthread_setspecific(m_key, new Data(ptr, this));
        }

    How do I make the above code use the __thread way of setting and getting a value? Where/when do the create and delete happen? If this is not possible, how would I write custom pthread_setspecific / pthread_getspecific-style APIs? I tried using a C++ global map indexed uniquely for each thread and retrieving data from it, but it didn't work well.
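
    To the three questions: with the GCC-style __thread storage class, each thread gets exactly one independent copy, and reads and writes use plain variable syntax, so there is no key management at all; creation and destruction happen with the thread itself. The catch is that (pre-C++11) it only works for types with no constructor or destructor and a constant initializer. A minimal sketch, shown with pthreads only because the sketch needs some way to start threads:

        #include <pthread.h>
        #include <cstdio>

        static __thread int tls_counter = 0;   // one independent copy per thread

        static void* worker(void*)
        {
            for (int i = 0; i < 3; ++i)
                ++tls_counter;                 // plain write, no pthread_setspecific
            std::printf("my copy: %d\n", tls_counter);  // prints 3 in each thread
            return 0;
        }

        int main()
        {
            pthread_t a, b;
            pthread_create(&a, 0, worker, 0);
            pthread_create(&b, 0, worker, 0);
            pthread_join(a, 0);
            pthread_join(b, 0);
            return 0;
        }

    Whether Nucleus' toolchain supports __thread at all is compiler- and RTOS-specific; if it doesn't, the usual fallback is exactly the map you tried, keyed by the current thread's ID, with a lock around the map itself (which is likely where the earlier attempt went wrong).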


  • Accommodating Multiple DLL Versions

    - by shadeseeker
    I have an application that uses a Microsoft DLL (Microsoft.ComponentStudio.ComponentPlatformImplementation.dll) for OS deployment and for accessing catalog files. Version 6.0.0.0 is specific to the Windows Server 2008 catalog files; the newer version 6.1.0.0 is specific to the Windows Server 2008 R2 catalog files. Attempting to access a catalog file with the wrong version results in an exception. My application (VB.NET, VS2005) needs to be able to access either catalog version. I'd be happy with two executables (one per catalog version), but obviously I don't want to maintain two sets of source code. Specifying both sets of DLLs as project references is not possible, as the DLL names are identical, and I'd rather not manually add and remove the DLL references every time I want to do a build. As far as I know, the interfaces of the two versions are effectively identical. I've read a few articles here and elsewhere about bindingRedirect, Assembly.Load, etc., but none seem to be bearing fruit. Any guidance on the best path to follow would be greatly appreciated. Thanks.
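
    A bindingRedirect probably can't help here, since it maps one version onto another while both versions are needed side by side. A hedged VB.NET sketch of loading the right copy at runtime instead of referencing it at compile time; the folder layout and type name are invented, Assembly.LoadFrom and Activator.CreateInstance are real:

        ' Pick the folder matching the catalog flavour detected at startup.
        Dim dir As String
        If isR2Catalog Then
            dir = "libs\6.1.0.0"
        Else
            dir = "libs\6.0.0.0"
        End If

        Dim asmPath As String = System.IO.Path.Combine(dir, _
            "Microsoft.ComponentStudio.ComponentPlatformImplementation.dll")
        Dim asm As System.Reflection.Assembly = _
            System.Reflection.Assembly.LoadFrom(asmPath)

        ' Late binding: reflection replaces the compile-time reference.
        Dim t As Type = asm.GetType("Microsoft.ComponentStudio.SomeType") ' name assumed
        Dim instance As Object = Activator.CreateInstance(t)

    The price is losing compile-time typing; the usual cure is a small interface of your own plus two thin adapter assemblies, one compiled against each DLL version, loaded the same way.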


  • Revision Control System Recommendations

    - by Jordan Arsenault
    I've reached a point in my independent development work where I would like to start using revision control. Up to now, I've simply made backups by exporting my current database and zipping it together with my PHP project files. I've read some articles online and watched a video with Linus Torvalds; the general verdict seems to be that Git is in and the old CVS-style techniques are out. I'm not currently operating under Linux; I do all my PHP work in Eclipse on Windows. Since Eclipse runs on the JVM, moving to Eclipse on Linux would be more or less transparent, file system aside. What I would like is a constant revision history that is almost entirely transparent to my workflow. I also work in an MVC framework, and I would like to be able to release my views to designers and have them work from within the revision control system too. Will EGit accomplish what I need, or is it too much overhead for a one-man workforce? What do you recommend I use so that I can keep a revision history? I also require the tool to be free! Thank you all - Stack Overflow ftw!
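
    If Git is the pick, the day-to-day loop is tiny; a sketch (these commands work the same in Git Bash on Windows, and EGit in Eclipse wraps the same operations in the IDE):

        git init                      # once, in the project root
        git add .                     # stage everything
        git commit -m "first import"  # snapshot

        # daily rhythm
        git add -A
        git commit -m "describe the change"
        git log --oneline             # browse the revision history

    The database still needs its own dump step; committing a schema/data export alongside the PHP files keeps code history and database history together, replacing the zip-file backups.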


  • Python NameError when attempting to use a user-defined class

    - by Michael Herold
    I'm getting a weird NameError when attempting to use a class I wrote. In a directory, I have the following file structure:

        dir/
            ReutersParser.py
            test.py
            reut-xxx.sgm

    where my custom class is defined in ReutersParser.py and I have a test script defined in test.py. ReutersParser.py looks something like this:

        from sgmllib import SGMLParser

        class ReutersParser(SGMLParser):
            def __init__(self, verbose=0):
                SGMLParser.__init__(self, verbose)
            # ... rest of parser

        if __name__ == '__main__':
            f = open('reut2-short.sgm')
            s = f.read()
            p = ReutersParser()
            p.parse(s)

    It's a parser for SGML files of Reuters articles, and the test at the bottom works perfectly. Anyway, I want to use it in test.py, which looks like this:

        from ReutersParser import ReutersParser

        def main():
            parser = ReutersParser()

        if __name__ == '__main__':
            main()

    When it gets to that parser line, I get this error:

        Traceback (most recent call last):
          File "D:\Projects\Reuters\test.py", line 34, in <module>
            main()
          File "D:\Projects\Reuters\test.py", line 19, in main
            parser = ReutersParser()
          File "D:\Projects\Reuters\ReutersParser.py", line 38, in __init__
            SGMLParser.__init__(self, verbose)
        NameError: global name 'sgmllib' is not defined

    For some reason, when I try to use my ReutersParser in test.py, it throws an error saying it cannot find sgmllib, which is a built-in module. I'm at my wits' end trying to figure out why the import won't work. What's causing this NameError? I've tried importing sgmllib in my test.py and that works, so I don't understand why it can't find it when running the constructor of my ReutersParser.
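
    One pattern that produces exactly this traceback (a guess, since the real line 38 isn't shown above): the file on disk mixes the two import styles, referencing sgmllib.Something somewhere while only from sgmllib import SGMLParser was executed; a stale ReutersParser.pyc from an older version of the file can have the same effect. A sketch of the consistent style that cannot hit this NameError:

        import sgmllib   # bind the module name itself

        class ReutersParser(sgmllib.SGMLParser):
            def __init__(self, verbose=0):
                # Qualify every use through the module, so the name always resolves.
                sgmllib.SGMLParser.__init__(self, verbose)

    Deleting ReutersParser.pyc before re-running rules out the stale-bytecode case.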


  • Visual analysis of web pages in Ruby

    - by Clint Miller
    I'm looking to write some code that does visual analysis of web pages, preferably in Ruby. The code needs to determine the top, left, width, height, background color, text color, and font size of every element in the DOM. Of course, these values can only be computed once all CSS has been applied, so I don't think Nokogiri is up to the job. Ultimately, I'm trying to use this data in a VIPS-like (Vision-based Page Segmentation) algorithm to find the main content in downloaded news articles. I've considered using Watir to drive Chrome or Firefox and then extract the data. The problem is that browsers can't be run headless through Watir (I think). This code will ultimately run on an array of Linux servers in a data center, so it won't have easy access to an X server for displaying the browser. I suppose one solution is to use Watir and run a headless X server on the Linux servers. That's a bit of a pain, but it looks like my best option right now. Does anyone have any better ideas?
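
    For the headless-X route, Xvfb makes this mostly painless; a sketch (xvfb-run is the standard Debian/Ubuntu wrapper, the script name is a placeholder, and the package name may vary by distribution):

        # run the Watir script against a virtual framebuffer instead of a real display
        xvfb-run -a ruby analyze_pages.rb

        # or manage the virtual display by hand
        Xvfb :99 -screen 0 1280x1024x24 &
        DISPLAY=:99 ruby analyze_pages.rb

    The browser behaves exactly as it would on a desktop, including full CSS layout, so the computed geometry and styles the VIPS-style algorithm needs are all available.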

