Search Results

Search found 11840 results on 474 pages for 'assembly context'.


  • Error Using 32 vs. 64 bit SharePoint 2007 DLLs with PowerShell

    - by Brian Jackett
    Next time you fire up PowerShell to work with the SharePoint API make sure you launch the proper bit version of PowerShell. Last week I had an interesting error that led to this blog post. Travel back in time a little bit with me to see where this 32 vs. 64 bit debate started.

    History

    Ever since the first pre-beta bits of Office 2010 landed in my lap I have been questioning whether it’s better to run 32 or 64 bit applications on a 64 bit host operating system. In relation to Office 2010 I heard a number of arguments for 32 bit including this link from the Office 2010 Engineering team. Given my typical usage scenarios 32 bit seemed the way to go since I wasn’t a “super RAM hungry” Excel user or the like.

    The Problem

    Since I had chosen 32 bit Office 2010, I tried to stick with the 32 bit version of other programs that I run assuming the same benefits and rules applied to other applications. This is where I was wrong. Last week I was attempting to use the 32 bit PowerShell ISE (Integrated Scripting Environment) on a 64 bit WSS 3.0 server. When trying to reference the 64 bit SharePoint DLLs I got the following errors about not being able to find the web application. I have run into these errors when I have hosts file issues or improper permissions to the farm / site collection but these were not the case. After taking a quick spin around the interwebs I ran across the below forum post comment and another MSDN forum reply that explained the error. Turns out that sometimes it’s not possible to run 32 bit applications against a 64 bit OS / farm / assembly / etc.

        “…the problem could also be because your SharePoint is 64-Bit but your app is running in 32-bit mode”

    I quickly exited the 32 bit PowerShell ISE and ran the same code under the 64 bit PowerShell ISE. All errors were gone and the script ran successfully.

    Conclusion

    The rules of 32 vs. 64 bit interoperability do not always apply evenly across all applications and scenarios. In my case I wasn’t able to run 32 bit PowerShell against 64 bit SharePoint DLLs. I’m updating all of my links and shortcuts to use 64 bit PowerShell where appropriate. I’m quite surprised it has taken me this long to run into this error, but sometimes blind luck is all that keeps you from running into errors. Lesson learned and hopefully this can benefit you as well. Happy SharePointing all!

    -Frog Out

    Links

    http://blogs.technet.com/b/office2010/archive/2010/02/23/understanding-64-bit-office.aspx
    http://social.msdn.microsoft.com/Forums/en-US/sharepointdevelopment/thread/a732cb83-c2ef-4133-b04e-86477b72bbe3/
    http://stackoverflow.com/questions/266255/filenotfoundexception-with-the-spsite-constructor-whats-the-problem

    Read the article

  • Don Knuth and MMIXAL vs. Chuck Moore and Forth -- Algorithms and Ideal Machines -- was there cross-pollination / influence in their ideas / work?

    - by AKE
    Question: To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms? I'm interested in citations, interviews, articles, links, or any other sort of evidence. It could also be evidence of the form of A and B here suggest that Moore might have borrowed or influenced C and D from Knuth here, or vice versa. (Opinions are of course welcome, but references / links would be better!) Context: Until fairly recently, I have been primarily familiar with Knuth's work on algorithms and computing models, mostly through TAOCP but also through his interviews and other writings. However, the more I have been using Forth, the more I am struck by both the power of a stack-based machine model, and the way in which the spareness of the model makes fundamental algorithmic improvements more readily apparent. A lot of what Knuth has done in fundamental analysis of algorithms has, it seems to me, a very similar flavour, and I can easily imagine that in a parallel universe, Knuth might perhaps have chosen Forth as his computing model. That's the software / algorithms / programming side of things. When it comes to "ideal computing machines", Knuth in the 70s came up with the MIX computer model, and then, collaborating with designers of state-of-the-art RISC chips through the 90s, updated this with the modern MMIX model and its attendant assembly language MMIXAL. Meanwhile, Moore, having been using and refining Forth as a language, but using it on top of whatever processor happened to be in the computer he was programming, began to imagine a world in which the efficiency and value of stack-based programming were reflected in hardware. So he went on in the 80s to develop his own stack-based hardware chips, defining the term MISC (Minimal Instruction Set Computers) along the way, and ending up eventually with the first Forth chip, the MuP21. Both are brilliant men with keen insight into the art of programming and algorithms, and both work at the intersection between algorithms, programs, and bare metal hardware (i.e. hardware without the clutter of operating systems). Which leads me to the headlined question... Question:To what extent is it known (or believed) that Chuck Moore and Don Knuth had influence on each other's thoughts on ideal machines, or their work on algorithms?

    Read the article

  • Viewing at Impossible Angles

    - by kemer
    The picture of the little screwdriver with the Allen wrench head to the right is bound to evoke a little nostalgia for those readers who were Sun customers in the late 80s. This tool was a very popular give-away: it was essential for installing and removing Multibus (you youngsters will have to look that up on Wikipedia…) cards in our systems. Back then our mid-sized systems were gargantuan: it was routine for us to schlep around a 200 lb. desk side box and 90 lb. monitor to demo a piece of software your smart phone will run better today. We were very close to the hardware, and the first thing a new field sales systems engineer had to learn was how to put together a system. If you were lucky, a grizzled service engineer might run you through the process once, then threaten your health and existence should you ever screw it up so that he had to fix it.

    Nowadays we make it much easier to learn the ins and outs of our hardware with simulations–3D animations–that take you through the process of putting together or replacing pieces of a system. Most recently, we have posted three sophisticated PDFs that take advantage of Acrobat 9 features to provide a really intelligent approach to documenting hardware installation and repair:

    Sun Fire X4800/X4800 M2 Animations for Chassis Components
    Sun Fire X4800/X4800 M2 Animations for Sub Assembly Module (SAM)
    Sun Fire X4800/X4800 M2 Animations for CMOD

    Download one of these documents and take a close look at it. You can view the hardware from any angle, including impossible ones. Each document has a number of procedures that break down into steps. Click on a procedure, then a step, and you will see it animated in the drawing. Of course hardware design has generally eliminated the need for things like our old giveaway tools: components snap and lock in. Often you can replace redundant units while the system is hot, but for heaven’s sake, you’ll want to verify that you can do that before you try it! Meanwhile, we can all look forward to a growing portfolio of these intelligent documents. We would love to hear what you think about them. –Kemer

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Schema Generation with Input Parameters

    - by Stuart Brierley
    As previously noted in my post on Schema Generation using the Community ODBC Adapter, I ran into a problem when trying to generate a schema to represent a MySQL stored procedure that had input parameters. After a bit of investigation and a few dead ends I managed to figure out a way around this issue - detailed below are both the problem and the solution in case you ever run into this yourself.

    The Problem

    Imagine a stored procedure that is coded as follows in MySQL:

        StuTest(in DStr varchar(80))
        BEGIN
          Declare GRNID int;
          Select grn_id into GRNID from grn_header where distribution_number = DStr;
          Select GRNID;
        END

    This is quite a simple stored procedure but can be used to illustrate the issue with parameters quite nicely. When generating the schema using the Add Generated Items wizard, I tried selecting "Stored Procedure" and then, in the Statement Information window, typing the stored procedure name: StuTest. Pressing Generate then gives the following error:

        "Incorrect Number of arguments for Procedure StuTest; expected 1, got 0"

    If you attempt to supply a value for the parameter you end up with a schema that will only ever supply the parameter value that you specify. For example, supplying StuTest('123') will always call the procedure with a parameter value of 123.

    The Solution

    I tried contacting Two Connect about this, but their experience of testing the adapter with MySQL was limited. After looking through the code for the ODBC adapter myself and trying a few things out, I was eventually able to use the ODBC adapter to call a test stored procedure using a two way send port. In the generate schema wizard, instead of selecting Stored Procedure I had to choose SQL Script instead, detailing the following script:

        Call StuTest(@InputParameter)

    By default this would create a request schema with an attribute called InputParameter, with a SQL type of NVarChar(1). In most cases this is not going to be correct for the stored procedure being called. To change the type from the default that is applied you need to select the "Override default query processing" check box when specifying the script in the wizard. This then opens the BizTalk ODBC Override window, which lets you change the properties of the parameters and also test out the query script. Once I had done this I was then able to generate the correct schema, which included an attribute representing the parameter. By deploying the schema assembly I was then able to try the ODBC adapter out on a two way send port. When supplied with an appropriate message instance (for the generated request schema) this send port successfully returned the expected response.

    Read the article

  • What type of interview questions should you ask for "legacy" programmers?

    - by Marcus Swope
    We have recently been receiving lots of applicants for our open developer positions from people who I like to refer to as "legacy" programmers. I don't like the term "old" because it seems a little prejudiced (especially to HR!) and it doesn't accurately reflect what I mean. We are a company that does primarily .NET development using TDD in an Agile environment, we use Git as a source control system, we make heavy use of OSS tools and projects and we contribute to them as well, we have a strong bias towards adhering to strong Object-Oriented principles, SOLID, etc, etc, etc... Now, the normal list of questions that we ask doesn't really seem to apply to applicants that are fresh out of school, nor does it seem to apply to these "legacy" programmers. Here is how I (loosely) define a "legacy" programmer. Spent a significant amount of their career working almost exclusively with Assembly/Machine Languages. Primary accomplishments include work done with TANDEM systems. Has extensive experience with technologies like FoxPro and ColdFusion It's not that we somehow think that what we do is "better" than what they do, on the contrary, we respect these types of applicants and we are scared that we may be missing a good candidate. It is just very difficult to get a good read on someone who is essentially speaking a different language than you. To someone like this, it seems a little strange to ask a question like: What is the difference between an abstract class and an interface? Because, I would think that they would almost never know the answer or even what I'm talking about. However, I don't want to eliminate someone who could be a very good candidate in their own right and could be able to eventually learn the stuff that we do. But, I also don't want to just ask a bunch of behavioral questions, because I want to know about their technical background as well. Am I being too naive? Should "legacy" programmers like this already know about things like TDD, source control strategies, and best practices for object-oriented programming? If not, what questions should we ask to get a good representation about whether or not they are still able to learn them and be able to keep up in our fast-paced environment? EDIT: I'm not concerned with whether or not applicants that meet these criteria are in general capable or incapable, as I have already stated that I believe that they can be 100% capable. I am more interested in figuring out how to evaluate their talents, as I am having a hard time figuring out how to determine if they are an A+ "legacy" programmer or if they are a D- "legacy" programmer. I've worked with both.

    Read the article

  • Connect to QuickBooks from PowerBuilder using RSSBus ADO.NET Data Provider

    - by dataintegration
    The RSSBus ADO.NET providers are easy-to-use, standards based controls that can be used from any platform or development technology that supports Microsoft .NET, including Sybase PowerBuilder. In this article we show how to use the RSSBus ADO.NET Provider for QuickBooks in PowerBuilder. A similar approach can be used from PowerBuilder with other RSSBus ADO.NET Data Providers to access data from Salesforce, SharePoint, Dynamics CRM, Google, OData, etc. In this article we will show how to create a basic PowerBuilder application that performs CRUD operations using the RSSBus ADO.NET Provider for QuickBooks.

    Step 1: Open PowerBuilder and create a new WPF Window Application solution.

    Step 2: Add all the Visual Controls needed for the connection properties.

    Step 3: Add the DataGrid control from the .NET controls.

    Step 4: Configure the columns of the DataGrid control as shown below. The column bindings will depend on the table.

        <DataGrid AutoGenerateColumns="False" Margin="13,249,12,14" Name="datagrid1" TabIndex="70" ItemsSource="{Binding}">
          <DataGrid.Columns>
            <DataGridTextColumn x:Name="idColumn" Binding="{Binding Path=ID}" Header="ID" Width="SizeToHeader" />
            <DataGridTextColumn x:Name="nameColumn" Binding="{Binding Path=Name}" Header="Name" Width="SizeToHeader" />
            ...
          </DataGrid.Columns>
        </DataGrid>

    Step 5: Add a reference to the RSSBus ADO.NET Provider for QuickBooks assembly.

    Step 6: Optional: Set the QBXML Version to 6. Some of the tables in QuickBooks require a later version of QuickBooks to support updates and deletes. Please check the help for details.

    Connect the DataGrid: Once the visual elements have been configured, developers can use standard ADO.NET objects like Connection, Command, and DataAdapter to populate a DataTable with the results of a SQL query:

        System.Data.RSSBus.QuickBooks.QuickBooksConnection conn
        conn = create System.Data.RSSBus.QuickBooks.QuickBooksConnection(connectionString)
        System.Data.RSSBus.QuickBooks.QuickBooksCommand comm
        comm = create System.Data.RSSBus.QuickBooks.QuickBooksCommand(command, conn)
        System.Data.DataTable table
        table = create System.Data.DataTable
        System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter dataAdapter
        dataAdapter = create System.Data.RSSBus.QuickBooks.QuickBooksDataAdapter(comm)
        dataAdapter.Fill(table)
        datagrid1.ItemsSource=table.DefaultView

    The code above can be used to bind data from any query (set this in command) to the DataGrid. The DataGrid should have the same columns as those returned from the SELECT statement.

    PowerBuilder Sample Project

    The included sample project includes the steps outlined in this article. You will also need the QuickBooks ADO.NET Data Provider to make the connection. You can download a free trial here.
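    For comparison, a minimal C# sketch of the same ADO.NET pattern is shown below. The provider classes are the ones used in the PowerScript sample above; the using block assumes the provider follows the standard ADO.NET DbConnection pattern, and the connection string and query are placeholders you would substitute with your own.

        // Minimal C# sketch of the pattern above; connection string and query are placeholders.
        using System.Data;
        using System.Data.RSSBus.QuickBooks;

        public static class QuickBooksGridLoader
        {
            public static DataTable LoadTable(string connectionString, string query)
            {
                // Assumes the provider connection is disposable like other ADO.NET connections.
                using (var conn = new QuickBooksConnection(connectionString))
                {
                    var comm = new QuickBooksCommand(query, conn);
                    var table = new DataTable();

                    // Fill the DataTable; the adapter opens the connection as needed.
                    var adapter = new QuickBooksDataAdapter(comm);
                    adapter.Fill(table);
                    return table;
                }
            }
        }

    Binding is then the same as in the PowerScript version: assign the returned table's DefaultView to the grid's ItemsSource.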

    Read the article

  • Stagnating in programming

    - by Coder
    Time after time this question came up in my mind, but up until today I wasn't thinking about it much. I have been programming for maybe around 8 years now, and for the last two years it seems I'm not as keen to pick up new technologies anymore. Maybe that's burnout or something, but I'd say it's experience and what I like that's stopping me from running after the latest and greatest. I'm a C++ developer; by this I mean I love close-to-the-metal programming. I have no trouble tracing problems through assembly, using tools like WinDbg or HexView. When I use constructs, I think about how they are realized underneath, how the bits are set and unset under the hood. I love battling with complex threading problems and doing everything the hardcore way, even by hand if the regular solutions seem half baked. But I also love the C++0x stuff, and use it a lot. And I love all C++ code as long as it's not cumbersome compared to its C counterparts; sometimes I also fall back to a sort of "Super C" if the C++ way is ugly. And then there are all the other developers who seem to be way more forward-looking: .NET 4.0 MVC, WPF, all those Microsoft X#s, LINQ languages, XML and XSLT, mobile devices and so on. I have done a considerable amount of .NET, SQL, ASPX programming, but the further I go, the less I want to try those technologies. Is that bad? Almost every day I hear people saying that managed code is the only way forward, WPF is the way to go. I hear that C++ is godawful, and you can't code anything in it that's somewhat stable. But I don't buy it. With the experience I have, and the knowledge of how native code is compiled and executes, I can say I find it extremely rare that C++ code is unstable, or leaks, or causes crashes that take more than 30 seconds to identify and fix. And to tell the truth, I've seen enough problems with other "cool" languages that I'd say C++ is even more stable and production-proof than the safe languages, at least for me. The only thing that scares me in C++ is new frameworks; I don't trust them, and I use them extra sparingly. STL - yes, ATL - very sparingly, everything else... Well, not very keen on it. Most huge problems I've run into were related to frameworks, not the language itself. Some overridden operator here, a bad hierarchy there, poor class design here, mystical casting there. Other than that, C/C++ (yes, I use them together) still seems a very controlled and stable way to develop applications. Am I stagnating? Should I switch professions, or force myself into all that marketing hype? Are there more developers who feel the same way?

    Read the article

  • Separating a "wad of stuff" utility project into individual components with "optional" dependencies

    - by romkyns
    Over the years of using C#/.NET for a bunch of in-house projects, we've had one library grow organically into one huge wad of stuff. It's called "Util", and I'm sure many of you have seen one of these beasts in your careers. Many parts of this library are very much standalone, and could be split up into separate projects (which we'd like to open-source). But there is one major problem that needs to be solved before these can be released as separate libraries. Basically, there are lots and lots of cases of what I might call "optional dependencies" between these libraries. To explain this better, consider some of the modules that are good candidates to become stand-alone libraries. CommandLineParser is for parsing command lines. XmlClassify is for serializing classes to XML. PostBuildCheck performs checks on the compiled assembly and reports a compilation error if they fail. ConsoleColoredString is a library for colored string literals. Lingo is for translating user interfaces. Each of those libraries can be used completely stand-alone, but if they are used together then there are useful extra features to be had. For example, both CommandLineParser and XmlClassify expose post-build checking functionality, which requires PostBuildCheck. Similarly, the CommandLineParser allows option documentation to be provided using the colored string literals, requiring ConsoleColoredString, and it supports translatable documentation via Lingo. So the key distinction is that these are optional features. One can use a command line parser with plain, uncolored strings, without translating the documentation or performing any post-build checks. Or one could make the documentation translatable but still uncolored. Or both colored and translatable. Etc. Looking through this "Util" library, I see that almost all potentially separable libraries have such optional features that tie them to other libraries. If I were to actually require those libraries as dependencies then this wad of stuff isn't really untangled at all: you'd still basically require all the libraries if you want to use just one. Are there any established approaches to managing such optional dependencies in .NET?
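    To make the shape of an "optional dependency" concrete, here is a purely illustrative C# sketch - not an endorsement of any particular approach - where an extra feature lights up only if a companion assembly can be found at runtime. The assembly and type names below are hypothetical stand-ins, not the real library names.

        // Illustrative sketch only: probe for an optional companion assembly at runtime.
        // "ConsoleColoredString" and the type name below are hypothetical stand-ins.
        using System;
        using System.IO;
        using System.Reflection;

        public static class OptionalColoring
        {
            private static readonly Type ColoredStringType = Probe();

            private static Type Probe()
            {
                try
                {
                    // Resolves only if the optional library is deployed next to this one.
                    Assembly optional = Assembly.Load("ConsoleColoredString");
                    return optional.GetType("ConsoleColoredString.ColoredString");
                }
                catch (FileNotFoundException)
                {
                    return null; // The optional feature simply stays switched off.
                }
            }

            // Callers check this before offering colored documentation, post-build checks, etc.
            public static bool ColoringAvailable
            {
                get { return ColoredStringType != null; }
            }
        }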

    Read the article

  • Using machine learning to aim mirrors in a solar array?

    - by Buttons840
    I've been thinking about solar collectors where several independent mirrors focus the light on a solar collector, similar to the following design from Energy Innovations. Because there will be flaws in the assembly of this solar array, I am proceeding with the following assumptions (or lack thereof):

    - The software knows the "position" of each mirror, but doesn't know how this position relates to the real world or to other mirrors. This will account for poor mirror calibration or other environmental factors which may affect one mirror but not the others.
    - If a mirror moves 10 units in one direction, and then 10 units in the opposite direction, it will end up where it originally started.

    I would like to use machine learning to position the mirrors correctly and focus the light on the collector. I expect I would approach this as an optimization problem, optimizing the mirror positions to maximize the heat inside the collector and the power output. The problem is finding a small target in a noisy high-dimensional space (considering each mirror has two axes of rotation). Some of the problems I anticipate are:

    - cloudy days: even if you stumble upon the perfect mirror alignment, it might be cloudy at the time
    - noisy sensor data
    - the sun is a moving target: it moves along a path, and follows a different path every day - although you could calculate the exact position of the sun at any time, you wouldn't know how that position relates to your mirrors

    My question isn't about the solar array, but possible machine learning techniques that would help in this "small target in a noisy high-dimensional space" problem. I mentioned the solar array because it was the catalyst for this question and a good example. What machine learning techniques can find such a small target in a noisy high-dimensional space?

    EDIT: A few additional thoughts: Yes, you can calculate the sun's position in the real world, but you don't know how the mirror's position is related to the real world (unless you've learned it somehow). You might know the sun's azimuth is 220 degrees and the sun's elevation is 60 degrees, and you might know a mirror is at position (-20, 42); now tell me, is that mirror correctly aligned with the sun? You don't know. Let's assume you have some very sophisticated heat measurements, and you know "with this heat level, there must be 2 mirrors correctly aligned". Now the question is, which two mirrors (out of 25 or more) are correctly aligned? One solution I considered was to approximate the correct "alignment function" using a neural network which would take the sun's azimuth and elevation as input and output a large array with two values for each mirror, corresponding to the mirror's two axes. I'm not sure what the best training method is though.
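    To make the optimization framing above concrete, here is a minimal C# sketch of treating alignment as black-box optimization of a noisy reward (the measured collector heat). It is only an illustration of the framing, not one of the machine learning techniques being asked about, and the mirror-controller and heat-sensor calls are hypothetical stand-ins for real hardware I/O.

        // Sketch: treat alignment as black-box optimization of a noisy reward (collector heat).
        // MoveMirror and ReadCollectorHeat delegates are hypothetical stand-ins for hardware I/O.
        using System;

        public class MirrorAligner
        {
            private readonly Random rng = new Random();
            private readonly Action<double, double> moveMirror;
            private readonly Func<double> readCollectorHeat;

            public MirrorAligner(Action<double, double> moveMirror, Func<double> readCollectorHeat)
            {
                this.moveMirror = moveMirror;
                this.readCollectorHeat = readCollectorHeat;
            }

            // Average several readings to damp sensor noise before comparing candidates.
            private double MeasureHeat(int samples)
            {
                double sum = 0;
                for (int i = 0; i < samples; i++) sum += readCollectorHeat();
                return sum / samples;
            }

            // Simple hill climbing on one mirror's two axes: take a small random step
            // and keep it only if the averaged heat reading improves.
            public void Align(ref double azimuth, ref double elevation, int iterations, double step)
            {
                moveMirror(azimuth, elevation);
                double best = MeasureHeat(5);

                for (int i = 0; i < iterations; i++)
                {
                    double candAz = azimuth + (rng.NextDouble() - 0.5) * step;
                    double candEl = elevation + (rng.NextDouble() - 0.5) * step;
                    moveMirror(candAz, candEl);
                    double heat = MeasureHeat(5);

                    if (heat > best) { azimuth = candAz; elevation = candEl; best = heat; }
                    else moveMirror(azimuth, elevation); // step back to the best known position
                }
            }
        }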

    Read the article

  • ApiChange Corporate Edition

    - by Alois Kraus
    In my initial announcement I could only cover a small subset of what ApiChange can do for you. Let's look at how ApiChange can help you fix bugs due to wrong usage of an API in a fraction of the time it would normally take. It happens that software is tested and some bugs show up. One bug could be: we get way too many log messages during our test run. Now you have the task to find the most frequent messages and eliminate the log calls from the source code. But what about the myriad other log calls? How can we check that the distribution of log calls is nearly equal across all developers? And if not, how can we contact the developer to check his code? ApiChange can help you to connect these loose ends. It combines several information silos into one cohesive view. The picture below shows how it is able to fill the gaps.

    The public version does currently "only" parse the binaries and pdbs to give you the following columns for a -whousesmethod query. If it happens that you have Rational ClearCase (a source control system) in your development shop and an Active Directory in place, then ApiChange will try to determine, from the source file (which was determined from the pdb), the last check-in user, who should be present in your Active Directory. From there it is only a small hop to an LDAP query to your AD domain or the GC (Global Catalog) to get, from the user name, his full name, email, phone number, department, and so on. ApiChange will append this additional data to all of your query results which contain source files if you add the -fileinfo option. As I said, this is currently not enabled by default since the AD domain needs to be configured, which is currently only some hard-coded values in the SiteConstants.cs source file of ApiChange.Api.dll.

    Once you have got this data you can generate metrics based on source file, developer, assembly, … and add additional data by drag and drop directly into the pivot tables inside Excel. This allows you, for example, to generate a report which lists the source files with the most log calls in descending order, along with the developer name and email, in the pivot table. Armed with this knowledge you can take meaningful measures, e.g. ask the developer if the huge number of log calls in this source file can be optimized. I am aware that this is a very specific scenario, but it is a huge time saver when you are able to fill the missing gaps of information. ApiChange does this in an extensible way.

        namespace ApiChange.ExternalData
        {
            public interface IFileInformationProvider
            {
                UserInfo GetInformationFromFile(string fileName);
            }
        }

    It defines an interface where you can implement your custom information provider to close the gap between the source control system and the real person I have to send an email to, asking if his code needs a closer inspection.
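    A custom provider might look roughly like the sketch below. The interface comes from ApiChange as shown above, but the UserInfo property names, the LDAP path, and the GetLastCheckInUser helper are assumptions made purely for illustration.

        // Sketch of a custom IFileInformationProvider. The UserInfo property names,
        // the LDAP path, and GetLastCheckInUser are illustrative assumptions only.
        using System.DirectoryServices;
        using ApiChange.ExternalData;

        public class AdFileInformationProvider : IFileInformationProvider
        {
            public UserInfo GetInformationFromFile(string fileName)
            {
                // 1. Ask your source control system who last checked in this file (stubbed below).
                string accountName = GetLastCheckInUser(fileName);

                // 2. Look the account up in Active Directory via LDAP.
                using (var searcher = new DirectorySearcher(new DirectoryEntry("LDAP://DC=example,DC=com")))
                {
                    searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + accountName + "))";
                    SearchResult result = searcher.FindOne();

                    var info = new UserInfo();
                    if (result != null)
                    {
                        info.FullName   = GetProp(result, "displayName");
                        info.Email      = GetProp(result, "mail");
                        info.Phone      = GetProp(result, "telephoneNumber");
                        info.Department = GetProp(result, "department");
                    }
                    return info;
                }
            }

            // Returns the first value of an AD attribute, or null if it is not populated.
            private static string GetProp(SearchResult result, string name)
            {
                return result.Properties.Contains(name) && result.Properties[name].Count > 0
                    ? result.Properties[name][0].ToString()
                    : null;
            }

            private static string GetLastCheckInUser(string fileName)
            {
                // Placeholder: query ClearCase (or your own source control system) here.
                return "jdoe";
            }
        }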

    Read the article

  • WebAPI and MVC4 and OData

    - by Aligned
    I was looking closer into WebAPI, specifically how to use OData to avoid writing GetCustomerByCustomerId(int id) methods all over the place. I had problems just returning IQueryable<T> as some sites suggested in the WebAPI (assembly System.Web.Http.dll, v4.0.0.0). I think things changed in the release version and the blog posts are still out of date. There is no [Queryable] as the answer to this question suggests. Once I got the WebAPI.OData NuGet package and added [Queryable] to the method, http://localhost:57146/api/values/?$filter=Id%20eq%201 worked (don’t forget the ‘$’). Now the main question is whether I should do this and how to stop logged-in users from sniffing the url and getting data for other users. John V. Peterson has a post on securing WebAPI with headers and intercepting the call at that point. He had an update to use HttpMessageHandlers instead. I think I’ll use this to force the call to contain some kind of unique code for the user, but I’m still thinking about this. I will not expose this to the public, just to my calls within my Forms Authentication areas.

    Other links:

    http://robbincremers.me/2012/02/16/building-and-consuming-rest-services-with-asp-net-web-api-and-odata-support/ ~ lots of good information

    John V Peterson example: https://github.com/johnvpetersen/ASPWebAPIExample ~ all data access goes through the WebApi and the web client doesn’t have a connection string ~ there is a code library for calling the WebApi from MVC using the HttpClient. It’s a great starting point

    http://blogs.msdn.com/b/alexj/archive/2012/08/15/odata-support-in-asp-net-web-api.aspx ~ Beta (9/18/2012) NuGet package to help with what I want to do? ~ has a sample code project with examples

    http://blogs.msdn.com/b/alexj/archive/2012/08/21/web-api-queryable-current-support-and-tentative-roadmap.aspx

    http://stackoverflow.com/questions/10885868/asp-net-mvc4-rc-web-api-odata-filter-not-working-with-iqueryable

    JSON: pass the correct format in the header (Accept: application/json). $format=JSON doesn’t appear to be working.

    Async methods built into WebApi! Look for the GetAsync methods.
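    A minimal sketch of the kind of action this enables is shown below. The controller, model, and data are illustrative only, and the QueryableAttribute here is the one shipped with the pre-release WebAPI OData package; its namespace moved between package versions, so adjust the using directive to match your install.

        // Minimal sketch of an OData-queryable action; names and data are illustrative,
        // and the [Queryable] attribute location varied between pre-release OData packages.
        using System.Linq;
        using System.Web.Http;

        public class ValuesController : ApiController
        {
            private static readonly Value[] Data =
            {
                new Value { Id = 1, Name = "First" },
                new Value { Id = 2, Name = "Second" }
            };

            // Returning IQueryable<T> plus [Queryable] lets Web API apply $filter, $orderby,
            // $top and $skip from the query string, e.g. /api/values/?$filter=Id eq 1
            [Queryable]
            public IQueryable<Value> Get()
            {
                return Data.AsQueryable();
            }
        }

        public class Value
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }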

    Read the article

  • PowerShell – Show a Notification Balloon

    - by BuckWoody
    In my presentations for PowerShell I sometimes want to start a process (like a backup) that will take some time. I normally pop up a notification “balloon” at the start, and then do the bulk of the work, and then pop up a balloon at the end to let me know it’s done. You can actually try out this little sample (on a test system, of course) without any other code to see what it does. Then just put the other PowerShell commands in the #Do Some Work part. Oh – throw an icon (.ico file) in a c:\temp directory or point that somewhere else. (No, this probably isn’t original. Can’t remember where I saw the original code, but I’ve modified it a bit anyway, so if you’re the original author and this looks slightly familiar, post a comment.)

        [void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
        $objBalloon = New-Object System.Windows.Forms.NotifyIcon
        $objBalloon.Icon = "C:\temp\Folder.ico"
        # You can use the value Info, Warning, Error
        $objBalloon.BalloonTipIcon = "Info"
        # Put what you want to say here for the Start of the process
        $objBalloon.BalloonTipTitle = "Begin Title"
        $objBalloon.BalloonTipText = "Begin Message"
        $objBalloon.Visible = $True
        $objBalloon.ShowBalloonTip(10000)
        # Do some work
        # Put what you want to say here for the completion of the process
        $objBalloon.BalloonTipTitle = "End Title"
        $objBalloon.BalloonTipText = "End Message"
        $objBalloon.Visible = $True
        $objBalloon.ShowBalloonTip(10000)

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • How can a solo programmer become a good team player?

    - by Nick
    I've been programming (obsessively) since I was 12. I am fairly knowledgeable across the spectrum of languages out there, from assembly, to C++, to Javascript, to Haskell, Lisp, and Qi. But all of my projects have been by myself. I got my degree in chemical engineering, not CS or computer engineering, but for the first time this fall I'll be working on a large programming project with other people, and I have no clue how to prepare. I've been using Windows all of my life, but this project is going to be very unix-y, so I purchased a Mac recently in the hopes of familiarizing myself with the environment. I was fortunate to participate in a hackathon with some friends this past year -- both CS majors -- and excitingly enough, we won. But I realized as I worked with them that their workflow was very different from mine. They used Git for version control. I had never used it at the time, but I've since learned all that I can about it. They also used a lot of frameworks and libraries. I had to learn what Rails was pretty much overnight for the hackathon (on the other hand, they didn't know what lexical scoping or closures were). All of our code worked well, but they didn't understand mine, and I didn't understand theirs. I hear references to things that real programmers do on a daily basis -- unit testing, code reviews, but I only have the vaguest sense of what these are. I normally don't have many bugs in my little projects, so I have never needed a bug tracking system or tests for them. And the last thing is that it takes me a long time to understand other people's code. Variable naming conventions (that vary with each new language) are difficult (__mzkwpSomRidicAbbrev), and I find the loose coupling difficult. That's not to say I don't loosely couple things -- I think I'm quite good at it for my own work, but when I download something like the Linux kernel or the Chromium source code to look at it, I spend hours trying to figure out how all of these oddly named directories and files connect. It's a programming sin to reinvent the wheel, but I often find it's just quicker to write up the functionality myself than to spend hours dissecting some library. Obviously, people who do this for a living don't have these problems, and I'll need to get to that point myself. Question: What are some steps that I can take to begin "integrating" with everyone else? Thanks!

    Read the article

  • Oracle JDeveloper 11gR2 Cookbook book review

    - by Chris Muir
    I recently received a free copy of Oracle JDeveloper 11gR2 Cookbook, published by Packt Publishing, for review. Readers of technical cookbooks would know this genre of text includes problems that developers will hit and the prescribed solutions, in this case for Oracle's Application Development Framework (ADF). Books like this excel through comprehensive coverage, a logical progression of solutions throughout the book, and a readable narrative around the numerous steps and code. This book progresses well through ADF application assembly, ADF Business Components, the view layer, security, deployment and tuning. Each recipe had a clear introduction and I especially enjoyed the "There's more" follow-up sections for some recipes that lead the reader onto related ideas and issues the reader really needs to be aware of. Also worthy of comment: having worked with ADF for over 5 years, there certainly were recipes and solutions I hadn't encountered before, and this book gets bonus points for that. As a reviewer, what negatives can I give this text? The book has cast its net too wide by trying to cover "everything from design and construction, to deployment, testing, debugging and optimization." ADF is such a large and sophisticated technology that this book, with 100 recipes, barely scrapes the surface. Don't expect all your ADF problems to be solved here. In turn there is inconsistency in the level of problems and solutions. I felt at the beginning the book was pitching itself at advanced problems to solve (that's great for me), but then it introduces topics like building a static View Object or train. These topics in my opinion are fairly simple and are covered by the Oracle documentation just as well; they shouldn't have been included here. In conclusion, ADF beginners will find this book worthwhile as it will open your eyes to the wider problems and solutions required for ADF, and experts for just the fact they can point junior programmers at the book for certain problems and say "get on with it". Is there scope for more ADF tomes like this? Yes! I'd love to see a cookbook specializing in ADF Business Components (hint hint to budding authors).

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity.

    Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure.

    Never mind the weight, the response time’s great!

    To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue it once was, and other factors such as user-experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.

    Read the article

  • Download files from a SharePoint site using the RSSBus SSIS Components

    - by dataintegration
    In this article we will show how to use a stored procedure included in the RSSBus SSIS Components for SharePoint to download files from SharePoint. While the article uses the RSSBus SSIS Components for SharePoint, the same process will work for any of our SSIS Components.

    Step 1: Open Visual Studio and create a new Integration Services Project.

    Step 2: Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.

    Step 3: Add an RSSBus SharePoint Source to the Data Flow Task.

    Step 4: In the RSSBus SharePoint Source, add a new Connection Manager, and add your credentials for the SharePoint site.

    Step 5: Now from the Table or View dropdown, choose the name of the Document Library that you are going to back up and close the wizard.

    Step 6: Add a Script Component to the Data Flow Task and drag an output arrow from the 'RSSBus SharePoint Source' to it.

    Step 7: Open the Script Component, go to edit the Input Columns, and choose all the columns.

    Step 8: This will open a new Visual Studio instance, with a project in it. In this project add a reference to the RSSBus.SSIS2008.SharePoint assembly available in the RSSBus SSIS Components for SharePoint installation directory.

    Step 9: In the 'ScriptMain' class, add the System.Data.RSSBus.SharePoint namespace and go to the 'Input0_ProcessInputRow' method (this method's name may vary depending on the input name in the Script Component).

    Step 10: In the 'Input0_ProcessInputRow' method, you can add code to use the DownloadDocument stored procedure. Below we show the sample code:

        String connString = "Offline=False;Password=PASSWORD;User=USER;URL=SHAREPOINT-SITE";
        String downloadDir = "C:\\Documents\\";
        SharePointConnection conn = new SharePointConnection(connString);
        SharePointCommand comm = new SharePointCommand("DownloadDocument", conn);
        comm.CommandType = CommandType.StoredProcedure;
        comm.Parameters.Clear();
        String file = downloadDir+Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@File", file));
        String list = Row.ServerUrl.ToString().Split('/')[1].ToString();
        comm.Parameters.Add(new SharePointParameter("@Library", list));
        String remoteFile = Row.LinkFilenameNoMenu.ToString();
        comm.Parameters.Add(new SharePointParameter("@RemoteFile", remoteFile));
        comm.ExecuteNonQuery();

    After saving your changes to the Script Component, you can execute the project and find the downloaded files in the download directory.

    SSIS Sample Project

    To help you with getting started using the SharePoint Data Provider within SQL Server SSIS, download the fully functional sample package. You will also need the SharePoint SSIS Connector to make the connection. You can download a free trial here.

    Note: Before running the demo, you will need to change your connection details in both the 'Script Component' code and the 'Connection Manager'.

    Read the article

  • Unity works on my PC but not on the Server. What did I miss?

    - by Erik France
    I have a web service using Microsoft Unity to hook the pieces together. It all works fine on my PC, but when I put it on the web server, I receive this error message:

        System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: The value of the property 'type' cannot be parsed. The error is: Method 'GetClaimsForUser' in type 'WebService.Implementation.ClaimsRetriever' from assembly 'WebService.Implementation, Version=1.0.0.0, Culture=neutral, PublicKeyToken=62cac0f1a908971a' does not have an implementation.

    If I look at the web.config, I see the following:

        <unity>
          <typeAliases>
            <typeAlias alias="ITokenGenerator" type="WebService.Interfaces.ITokenGenerator, WebService.Interfaces" />
            <typeAlias alias="TokenGenerator" type="WebService.Implementation.TokenGenerator, WebService.Implementation" />
            <typeAlias alias="IClaimsRetriever" type="WebService.Interfaces.IClaimsRetriever, WebService.Interfaces" />
            <typeAlias alias="ClaimsRetriever" type="WebService.Implementation.ClaimsRetriever, WebService.Implementation" />
            <typeAlias alias="TokenGeneratorSettings" type="WebService.Implementation.TokenGeneratorSettings, WebService.Implementation" />
            <typeAlias alias="String" type="System.String, mscorlib" />
          </typeAliases>
          <containers>
            <container>
              <types>
                <type type="ITokenGenerator" mapTo="TokenGenerator">
                  <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration">
                    <constructor>
                      <param name="retriever" parameterType="IClaimsRetriever">
                        <dependency />
                      </param>
                      <param name="settings" parameterType="TokenGeneratorSettings">
                        <dependency />
                      </param>
                    </constructor>
                  </typeConfig>
                </type>
                <type type="IClaimsRetriever" mapTo="ClaimsRetriever">
                  <typeConfig extensionType="Microsoft.Practices.Unity.Configuration.TypeInjectionElement, Microsoft.Practices.Unity.Configuration">
                    <constructor>
                      <param name="connectionStringName" parameterType="String">
                        <value value="devDatabase" type="String" />
                      </param>
                    </constructor>
                  </typeConfig>
                </type>
              </types>
            </container>
          </containers>
        </unity>

    I have another web service, using an almost identical config, running on the web server. But this new web service will not run.

    Any ideas on what I have not told Unity to do? Or maybe what I told Unity to do incorrectly?

    Read the article

  • Changing Silverlight application themes at runtime

    We have received a lot of questions about how the application theme can be changed at run time. The most important thing to note here is that each time the application theme is changed, all the controls should be re-drawn. Without going into too much detail, we could explain the application themes as a mechanism to replace the content of the Generic.xaml file in every loaded Telerik assembly at runtime. This does not affect the controls that already have their default style applied, hence the need to create new instances. Because in Silverlight applications the RootVisual cannot be changed at run time, we need a way to reset the application UI. The following code is in App.xaml.cs:

        private void Application_Startup(object sender, StartupEventArgs e)
        {
            // Before:
            // this.RootVisual = new MainPage();

            this.RootVisual = new Grid();
            this.ResetRootVisual();
        }

        public void ResetRootVisual()
        {
            var rootVisual = Application.Current.RootVisual as Grid;
            rootVisual.Children.Clear();
            rootVisual.Children.Add(new MainPage());
        }

    In Application_Startup(), instead of creating a new MainPage UserControl instance as RootVisual, we create a new Grid panel that will contain the MainPage UserControl. In the ResetRootVisual() method we create the instance of MainPage and add it to the RootVisual panel. Then we have to create a method in the code-behind which will set StyleManager.ApplicationTheme and then call the ResetRootVisual() method:

        private void ChangeApplicationTheme(Theme theme)
        {
            StyleManager.ApplicationTheme = theme;
            (Application.Current as App).ResetRootVisual();
        }

    Here you can find an example which illustrates the described implementation of a Silverlight theme. For more information please refer to Telerik's online demos for Silverlight, the demos for WPF, and the help documentation for WPF and for Silverlight.
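    As a rough usage sketch, ChangeApplicationTheme could be wired to a button in the same code-behind; Windows7Theme is used here only as an example of a Telerik theme class, so substitute whichever theme assembly you actually reference.

        // Hypothetical usage sketch: switch the whole application to another theme at runtime.
        private void OnThemeButtonClick(object sender, System.Windows.RoutedEventArgs e)
        {
            // Windows7Theme is just an example; any Telerik theme instance can be passed in.
            ChangeApplicationTheme(new Telerik.Windows.Controls.Windows7Theme());
        }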

    Read the article

  • Where does a "Technical Programmer" fit in, and what does the title mean? [closed]

    - by Mike E
    Was: "What is a 'Technical Programmer'"? I've noticed in job posting boards a few postings, all from European companies in the games industry, for a "Technical Programmer". The job description was similar, having to do with tools development, 3d graphics programming, etc. It seems to be somewhere between a Technical Artist who's more technical than artist or who can code, and a Technical Director but perhaps without the seniority/experience. Information elsewhere on the position is sparse. The title seems redundant and I haven't seen any American companies post jobs by that name, exactly. One example is this job posting on gamedev.net which isn't exactly thorough. In case the link dies: Subject: Technical Programmer Frictional Games, the creators of Amnesia: The Dark Descent and the Penumbra series, are looking for a talented programmer to join the company! You will be working for a small team with a big focus on finding new and innovating solutions. We want you who are not afraid to explore uncharted territory and constantly learn new things. Self-discipline and independence are also important traits as all work will be done from home. Some the things you will work with include: 3D math, rendering, shaders and everything else related. Console development (most likely Xbox 360). Hardware implementations (support for motion controls, etc). All coding is in C++, so great skills in that is imperative. Revised Summarised Question: So, where does a programmer of this nature fit in to software development team? If I had these on my team, what tasks am I expecting them to complete? Can I ask one to build a new level editor, or optimize the rendering engine? It doesn't seem to be a "tools programmer" which focuses on producing artist tools, often in high-level languages like C#, Python, or Java. Nor does it seem to be working directly on the engine, nor a graphics programmer, as such. Yet, a strong C++ requirement, which was mirrored in other postings besides this one I quoted. Edited To Add As far as it being a low-level programmer, I had considered that but lacking from the posting was a requirement of Assembly. Instead, they tend to require familiarity with higher-level hardware APIs such as DirectX, or DirectInput. I wasn't fully clear in my original post. I think, however, that Mathew Foscarini has it right in his answer, so barring someone who definitely works with or as a "Technical Programmer" stepping in to provide a clearer explanation, I'll go with that. A generalist, which also fits the description of a more-technical-than-artist TA.

    Read the article

  • Collision detection doesn't work for automated elements in XNA 4.0

    - by NDraskovic
    I have a really weird problem. I made a 3D simulator of an "assembly line" as a part of a college project. Among other things it needs to be able to detect when a box object passes in front of a sensor. I tried to solve this by making a model of a laser and checking if the box collides with it. I had some problems with the BoundingSpheres of the models' meshes, so I simply create a BoundingSphere and place it in the same place as the model. I organized them into a list of BoundingSpheres called "spheres", and for each model I create one BoundingSphere. All models except the box are static, so the box object has its own BoundingSphere (not a member of the "spheres" list). I also implemented a picking algorithm that I use to start the movement. This is the code that checks for collision:

        if (spheres.Count != 0)
        {
            for (int i = 1; i < spheres.Count; i++)
            {
                if (spheres[i].Intersects(PickingRay) != null && Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton)
                {
                    start = true;
                    break;
                }
                if (BoxSphere.Intersects(spheres[i]) && start)
                {
                    // The MoveBox function receives the direction (0) and a bool value that dictates
                    // whether the box should move or not (false means stop).
                    MoveBox(0, false);
                    start = false;
                    break;
                }
                if (start /*&& Microsoft.Xna.Framework.Input.ButtonState.Pressed == Mouse.GetState().LeftButton*/ && !BoxSphere.Intersects(spheres[i]))
                {
                    MoveBox(0, true);
                    break;
                }
            }
        }

    The problem is this: when I use the mouse to move the box (the commented part in the third if condition) the collision works fine (I have another part of code that I removed to simplify my question - it calculates the "address" of the box, and by that number I know that the collision is correct). But when I comment it out (like in this example) the box just passes through the lasers and does not detect the collision (the idea is that the box stops at each laser and the user passes it forward by clicking on the appropriate "switch"). Can you see the problem? Please help, and if you need more information I will try to provide it. Thanks

    Read the article

  • Summit 2014 Registration Is Open

    - by KemButller
    Attention Oracle (employees) Field Team and Oracle JD Edwards Partners: REGISTRATION IS NOW OPEN for Oracle's 5th Annual JD Edwards Summit - Monday, January 27th through Friday, January 31st, 2014, in Broomfield, Colorado.

    The theme of this year's Summit is "Success Through Continued Innovation". Our goals are to update you on our current and future product roadmap, new products, selling strategies for new prospects, growing the footprint in our JD Edwards install base, as well as providing a venue for networking. The JD Edwards Summit is to the selling and servicing community what COLLABORATE is to the user community. This is a MUST ATTEND event if you recommend, sell, implement and/or support the JD Edwards product, whether you are new to JD Edwards or a seasoned pro, an executive, account executive, in presales or in consulting. The Summit promises a content-rich and unique networking experience for all attendees.

    Highlights include:

    Monday afternoon kicks off the Summit with a variety of workshops as well as an afternoon preview of the Sponsor Showcase. Start your networking at the Summit kickoff party Monday evening.

    Tuesday morning features several informative keynotes in the Summit General Assembly followed by key messages delivered in Super Sessions in the afternoon, focused on each of the JD Edwards community audiences.

    The educational offerings continue on Wednesday and Thursday with over 90 breakout sessions on topics spanning technology, applications (core JD Edwards, Edge, Fusion), Sales, Presales and Implementation.

    Friday concludes with new workshops for the implementation community.

    Attendees will be enriched with numerous opportunities to network with fellow partners and Oracle throughout the week. Consider bringing your team and using this venue to hold your own organization kickoff meeting prior to or post Summit. Contact Sheila Ebbitt (Sheila.ebbitt@oracle-DOT-com) for further assistance with your planning.

    Attendees will be charged a Summit fee of US$ 250. Online registration cut-off is January 17, 2014. All registration requests after that time will be processed on-site at the event with an attendee fee of US$ 500. Please contact Rene Chapman (rene.chapman@oracle-DOT-com) for information on sponsorship opportunities.

    For further details on the JD Edwards Summit including agenda, workshops, educational sessions, lodging, sponsors and Summit registration, click here! Register now! This is going to be an awesome event!

    John Schiff
    Vice President
    JD Edwards Business Development

    Read the article

  • RAID superblock missing on single partition. Recovery needed!

    - by user171639
    OK, so I have a 2 TB RAID 1 setup that has three partitions:
    sdc1: linux
    sdc2: swap
    sdc3: LVM for data
    However, the LVM will no longer mount. So I thought that I would take the first drive, mount it in Linux (I've done this before), and reset the spare drive to copy the data. Normally I can mount a single drive for data recovery using:
        sudo su
        apt-get install mdadm lvm2
        mdadm --assemble --scan
        modprobe dm-mod
        vgscan
        vgchange -ay c
        mount -o ro /dev/c/c /mnt
    Unfortunately, vgscan does not recognize the data partition. It appears as though the superblock on the first drive's data partition was erased while syncing with the second. So now I cannot mount that partition, and the second drive is stuck in spare mode. Any ideas? Or a way to force-mount the data partition just to copy the data?
        knoppix@Microknoppix:~$ sudo su
        root@Microknoppix:/home/knoppix# apt-get install mdadm lvm2
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        lvm2 is already the newest version.
        mdadm is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 551 not upgraded.
        root@Microknoppix:/home/knoppix# mdadm --assemble --scan
        mdadm: /dev/md/1 has been started with 1 drive (out of 2).
        mdadm: /dev/md/0 has been started with 1 drive (out of 2).
        root@Microknoppix:/home/knoppix# modprobe dm-mod
        root@Microknoppix:/home/knoppix# vgscan
        Reading all physical volumes. This may take a while...
        No volume groups found
        root@Microknoppix:/home/knoppix# cat /proc/mdstat
        Personalities : [raid1]
        md0 : active raid1 sdc1[2]
              4193268 blocks super 1.2 [2/1] [U_]
        md1 : active raid1 sdc2[2]
              524276 blocks super 1.2 [2/1] [U_]
        unused devices: <none>
        root@Microknoppix:/home/knoppix# mdadm -v --assemble --auto=yes /dev/md2 /dev/sdc3
        mdadm: looking for devices for /dev/md2
        mdadm: no recogniseable superblock on /dev/sdc3
        mdadm: /dev/sdc3 has no superblock - assembly aborted
        root@Microknoppix:/home/knoppix# dumpe2fs /dev/md0 | grep -i superblock
        dumpe2fs 1.42.4 (12-Jun-2012)
        Primary superblock at 0, Group descriptors at 1-1
        Backup superblock at 32768, Group descriptors at 32769-32769
        Backup superblock at 98304, Group descriptors at 98305-98305
        Backup superblock at 163840, Group descriptors at 163841-163841
        Backup superblock at 229376, Group descriptors at 229377-229377
        Backup superblock at 294912, Group descriptors at 294913-294913
        Backup superblock at 819200, Group descriptors at 819201-819201
        Backup superblock at 884736, Group descriptors at 884737-884737
        root@Microknoppix:/home/knoppix#
    Notes: I can read the superblock from the spare drive. I was going to try to restore the superblock from one of the backups, but I don't know how, or whether it would work. I also heard that creating a new array (mdadm --create) using the same parameters will not delete the data on the drive, but I didn't want to risk it. Recommendations?

    Read the article

  • Does OO, TDD, and Refactoring to Smaller Functions affect Speed of Code?

    - by Dennis
    In the Computer Science field, I have noticed a notable shift in thinking when it comes to programming. The advice as it stands now is:
    - write smaller, more testable code
    - refactor existing code into smaller and smaller chunks until most of your methods/functions are just a few lines long
    - write functions that only do one thing (which makes them smaller again)
    This is a change compared to the "old" or "bad" code practices where you have methods spanning 2500 lines, and big classes doing everything. My question is this: when it all comes down to machine code, to 1s and 0s, to assembly instructions, should I be at all concerned that my class-separated code with a variety of small-to-tiny functions generates too much extra overhead? While I am not exactly familiar with how OO code and function calls are handled in ASM in the end, I do have some idea. I assume that each extra function call, object call, or include call (in some languages) generates an extra set of instructions, thereby increasing the code's volume and adding various overhead, without adding actual "useful" code. I also imagine that good optimizations can be done to the ASM before it is actually run on the hardware, but that optimization can only do so much. Hence, my question: how much overhead (in space and speed) does well-separated code (split up across hundreds of files, classes, and methods) actually introduce compared to having "one big method that contains everything"?
    UPDATE for clarity: I am assuming that adding more and more functions and more and more objects and classes to a code base will result in more and more parameter passing between smaller code pieces. It was said somewhere (quote TBD) that up to 70% of all code is made up of ASM's MOV instruction - loading CPU registers with the proper variables, not the actual computation being done. In my case, you load up the CPU's time with PUSH/POP instructions to provide linkage and parameter passing between various pieces of code. The smaller you make your pieces of code, the more "linkage" overhead is required. I am concerned that this linkage adds to software bloat and slow-down, and I am wondering whether I should be concerned about it, and how much, if at all, because current and future generations of programmers who are building software for the next century will have to live with and consume software built using these practices.
    UPDATE: Multiple files. I am writing new code now that is slowly replacing old code. In particular, I've noted that one of the old classes was a ~3000 line file (as mentioned earlier). Now it is becoming a set of 15-20 files located across various directories, including test files and not including the PHP framework I am using to bind some things together. More files are coming as well. When it comes to disk I/O, loading multiple files is slower than loading one large file. Of course not all files are loaded, they are loaded as needed, and disk caching and memory caching options exist, and yet I still believe that loading multiple files takes more processing than loading a single file into memory. I am adding that to my concern.
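    A quick way to put a number on the raw cost of an extra call is sketched below in Python, chosen here only because it is an interpreter that does no inlining, so the per-call overhead is at its most visible; optimizing C/C++ compilers and JITs typically inline functions this small, so the penalty there is usually far lower:
        import timeit

        def inline_sum(n):
            # body written "inline": no helper call inside the loop
            total = 0
            for i in range(n):
                total += i * i
            return total

        def square(i):
            return i * i

        def factored_sum(n):
            # same work, but each iteration pays for one extra function call
            total = 0
            for i in range(n):
                total += square(i)
            return total

        setup = "from __main__ import inline_sum, factored_sum"
        print(timeit.timeit("inline_sum(100000)", setup=setup, number=200))
        print(timeit.timeit("factored_sum(100000)", setup=setup, number=200))
    On CPython the factored version is measurably slower; whether that difference survives an optimizing compiler or JIT is exactly the question above, and in most compiled languages it largely does not.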

    Read the article

  • Best language on Linux to replace manual tasks that use SSH/Telnet? [on hold]

    - by Calab
    I've been tasked with creating and maintaining a web-browser-based interface to replace several of the manual tasks that we perform now. I currently have a "shaky" but working program written in Perl (2779 lines) that uses basic Expect coding, but it has some limitations that require a great deal of coding to get around. Because of this I am going to do a complete rewrite and want to do it "right" this time. My question is this: what would be the best language to use to create a web-based interface to perform SSH/Telnet tasks that we would normally do manually? Keep in mind the following requirements:
    - Runs on a CentOS Linux system v5.10
    - HTTP will be served by Apache2
    - This is an INTRANET site and only accessible within our organization. User load will be light. No more than 5 users accessing it at one time.
    - perl 5.8.8, php 5.3.3, python 2.7.2 are available... Not sure what other languages to check for, or what modules might be installed in each language.
    - The web interface will need to provide progress indicators and the text output produced by the remote connection, in real time as it is generated.
    - If we are running our process on multiple hosts, they should be in individual threads so that they can run side by side, not sequentially.
    - I want the ability to "trap" on specific text generated by the remote host and display an alert to the user - such as when the remote host generates an error message.
    - I would like to avoid as much client-side scripting (javascript/vbscript) as I can. Most users will be on Windows PCs using Chrome or IE as a browser.
    - Users will be downloading the resulting output so they can process it as they see fit.
    I currently have no experience with "Ajax" or the like. Most of my coding experience is old 6809 assembly, Visual Basic 6, and whatever I can cut/paste from online examples in various languages (hence my "shaky" Perl program). My coding environment is Eclipse for remote code editing, but I prefer stuff like UltraEdit if I can get a decent syntax file for the language I'm using. I do have su access on the server, but I'm not the only one using this server, so I can't just upgrade/install blindly as I might impact other software currently running on the machine. One reason that I'm asking here, instead of searching (which I did), is that most replies were "use language 'xyz', but you need to use an external SSH connection" - like I'm using Expect in my Perl script. Most also did not agree on what language that 'xyz' should be. ...so, after this long posting, can someone offer some advice?
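    For what it is worth, here is a minimal sketch of the SSH side in Python (the question lists python 2.7.2 as available), assuming the third-party paramiko library can be installed; the host name, credentials, and command in the usage note are made-up placeholders:
        import paramiko  # third-party SSH library, would need to be installed separately

        def run_remote(host, user, password, command):
            """Run one command over SSH and return its output lines."""
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=user, password=password, timeout=10)
            try:
                stdin, stdout, stderr = client.exec_command(command)
                return stdout.read().decode("utf-8", "replace").splitlines()
            finally:
                client.close()

        # hypothetical usage:
        # for line in run_remote("intranet-host", "opsuser", "secret", "uptime"):
        #     print(line)
    Each target host could then run in its own thread (or a small worker pool), with output pushed back to the browser incrementally to satisfy the real-time progress requirement; Expect-style "trap on this text" logic becomes an ordinary substring or regex check on each output line.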

    Read the article

< Previous Page | 307 308 309 310 311 312 313 314 315 316 317 318  | Next Page >