Search Results

Search found 16554 results on 663 pages for 'programmers identity'.

Page 293 of 663

  • How to define template directives (from an API perspective)?

    - by Ralph
    Preface

    I'm writing a template language (don't bother trying to talk me out of it), and in it there are two kinds of user-extensible nodes: TemplateTags and TemplateDirectives. A TemplateTag closely relates to an HTML tag -- it might look something like

        div(class="green") { "content" }

    and it'll be rendered as

        <div class="green">content</div>

    i.e., it takes a bunch of attributes, plus some content, and spits out some HTML. TemplateDirectives are a little more complicated. They can be things like for loops, ifs, includes, and other such things. They look a lot like a TemplateTag, but they need to be processed differently. For example,

        @for($i in $items) { div(class="green") { $i } }

    would loop over $items and output the content with the variable $i substituted in each time. So... I'm trying to decide on a way to define these directives now.

    Template Tags

    The TemplateTags are pretty easy to write. They look something like this:

        [TemplateTag]
        static string div(string content = null, object attrs = null)
        {
            return HtmlTag("div", content, attrs);
        }

    Here, content gets the stuff between the curly braces (pre-rendered if there are variables in it and such), and attrs is either a Dictionary<string, object> of attributes or an anonymous type used like a dictionary. It just returns the HTML, which gets plunked into its place. Simple! You can write tags in basically one line.

    Template Directives

    The way I've defined them now looks like this:

        [TemplateDirective]
        static string @for(string @params, string content)
        {
            var tokens = Regex.Split(@params, @"\sin\s").Select(s => s.Trim()).ToArray();
            string itemName = tokens[0].Substring(1);
            string enumName = tokens[1].Substring(1);

            var enumerable = data[enumName] as IEnumerable;
            var sb = new StringBuilder();
            var template = new Template(content);

            foreach (var item in enumerable)
            {
                var templateVars = new Dictionary<string, object>(data) { { itemName, item } };
                sb.Append(template.Render(templateVars));
            }
            return sb.ToString();
        }

    (Working example). Basically, the stuff between the ( and ) is not split into arguments automatically (as it is for template tags), and the content isn't pre-rendered either. The reason it isn't pre-rendered is that you might want to add or remove some template variables first. In this case, we add the $i variable to the template variables:

        var templateVars = new Dictionary<string, object>(data) { { itemName, item } };

    and then render the content manually:

        sb.Append(template.Render(templateVars));

    Question

    I'm wondering if this is the best approach to defining custom template directives. I want to make it as easy as possible. What if the user doesn't know how to render templates, or doesn't know that he's supposed to? Maybe I should pass in a Template instance pre-filled with the content instead? Or maybe only let him tamper with the template variables, and then automatically render the content at the end? On the other hand, for things like "if", when the condition fails the template doesn't need to be rendered at all. So there's a lot of flexibility I need to allow for here. Thoughts?
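    To make the pre-filled-Template alternative concrete, here is a minimal sketch of what an @if directive could look like under that API. This is hypothetical: Truthy() is an invented helper, and data is the same variable dictionary used by @for above.

        [TemplateDirective]
        static string @if(string @params, Template content)
        {
            // The framework has already wrapped the raw body in a Template,
            // so the directive author never constructs or renders one by
            // hand unless needed -- and @if can skip rendering entirely
            // when the condition fails.
            string name = @params.Trim().Substring(1); // strip the leading '$'
            return Truthy(data[name]) ? content.Render(data) : string.Empty;
        }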

  • What is wrong with Unix/Linux

    - by John Smith
    This is a genuine question motivated by Ideal Operating System. When I moved from DOS to Linux in the late 90s it was an eye-opener for me: long file names, arbitrarily many extensions, and so on. Now I look at Linux and Unix and see all sorts of issues. Here are things I see which could be fixed:

    - Too much depends on root, and rootly powers cannot be voluntarily delegated across several users. (I would love to give up my power to manage printers and delegate the job to another account.)
    - File permissions are very limited, and there is not much metadata to go with files.
    - The "everything is a file" metaphor is not true; Plan 9 gets it right(er).

  • Making LISPs manageable

    - by Andrea
    I am trying to learn Clojure, which seems like a good candidate for a successful LISP. I have no problem with the concepts, but now I would like to start actually doing something. Here is my problem: as I mainly do web stuff, I have been looking into existing frameworks, database libraries, templating libraries and so on. Often these libraries are heavily based on macros. Now, I very much like the possibility of writing macros to get a simpler syntax than would otherwise be possible, but it definitely adds another layer of complexity. Let me take as an example a migration in Lobos, from a blog post:

        (defmigration add-posts-table
          (up []
            (create clogdb
              (table :posts
                (integer :id :primary-key)
                (varchar :title 250)
                (text :content)
                (boolean :status (default false))
                (timestamp :created (default (now)))
                (timestamp :published)
                (integer :author [:refer :authors :id] :not-null))))
          (down []
            (drop (table :posts))))

    It is very readable indeed. But it is hard to recognize what the structure is. What does the function timestamp return? Or is it a macro? Having all this freedom to write my own syntax means that I have to learn other people's syntax for every library I want to use. How can I learn to use these components effectively? Am I supposed to learn each small DSL as a black box?

  • Understanding Data Binding for Windows Phone 7

    - by nikhil
    I want to develop a simple app for the Windows Phone 7 platform. It's basically a vocabulary-based game that involves the user moving word tiles from one area to another to score points. I want to know the best way of tying the UI to the game's backend. I saw the Windows Phone 7 Jumpstart videos; they touch on data binding but don't really go into any depth. I'm a newbie and don't have any experience designing the architecture for a phone app. It'd be great if someone could explain what steps I should take or point me to a resource where I could learn more.
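    For reference, the core of the data-binding pattern is a view model that raises change notifications -- a minimal, hypothetical sketch (GameViewModel and Score are invented names, not from any official sample):

        using System.ComponentModel;

        // A minimal view model: any XAML element bound to Score, e.g.
        //   <TextBlock Text="{Binding Score}" />
        // refreshes automatically whenever the setter raises PropertyChanged.
        public class GameViewModel : INotifyPropertyChanged
        {
            private int score;

            public int Score
            {
                get { return score; }
                set
                {
                    if (score == value) return;
                    score = value;
                    OnPropertyChanged("Score");
                }
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                var handler = PropertyChanged;
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs(name));
                }
            }
        }

    The page's DataContext would be set to an instance of such a class; moving a tile then just updates the view model, and the bound UI follows.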

  • How to suggest using an ORM instead of stored procedures?

    - by Wayne M
    I work at a company that only uses stored procedures for all data access, which makes it very annoying to keep our local databases in sync, since on every commit we have to run new procs. I have used some basic ORMs in the past and I find the experience much better and cleaner. I'd like to suggest to the development manager and the rest of the team that we look into using an ORM of some kind for future development (the rest of the team are only familiar with stored procedures and have never used anything else).

    The current architecture is .NET 3.5 written like .NET 1.1, with "god classes" that use a strange implementation of ActiveRecord and return untyped DataSets, which are looped over in code-behind files. The classes work something like this:

        class Foo
        {
            public bool LoadFoo()
            {
                bool blnResult = false;
                if (this.FooID == 0)
                {
                    throw new Exception("FooID must be set before calling this method.");
                }

                DataSet ds = // ... call to Sproc
                if (ds.Tables[0].Rows.Count > 0)
                {
                    foo.FooName = ds.Tables[0].Rows[0]["FooName"].ToString();
                    // other properties set
                    blnResult = true;
                }
                return blnResult;
            }
        }

        // Consumer
        Foo foo = new Foo();
        foo.FooID = 1234;
        foo.LoadFoo();
        // do stuff with foo...

    There is pretty much no application of any design patterns. There are no tests whatsoever (nobody else knows how to write unit tests, and testing is done by manually loading up the website and poking around). Looking through our database we have 199 tables, 13 views, a whopping 926 stored procedures, and 93 functions. About 30 or so tables are used for batch jobs or external things; the remainder are used in our core application.

    Is it even worth pursuing a different approach in this scenario? I'm talking about moving forward only, since we aren't allowed to refactor the existing code ("it works"), so we cannot change the existing classes to use an ORM. I also don't know how often we add brand-new modules rather than adding to or fixing current ones, so I'm not sure if an ORM is the right approach (too much invested in stored procedures and DataSets). If it is the right choice, how should I present the case for using one? Off the top of my head, the only benefits I can think of are cleaner code (though it might not be, since the current architecture isn't built with ORMs in mind, so we would basically be jury-rigging ORMs onto future modules while the old ones still use DataSets) and less hassle remembering which procedure scripts have been run and which still need to be run. I don't know how compelling an argument that would be. Maintainability is another concern, but one that nobody except me seems to care about.
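    For contrast, a hedged sketch of roughly what the same load could look like with a micro-ORM such as Dapper (invented table and column names, placeholder connection string; not a drop-in replacement for the class above):

        using System.Data.SqlClient;
        using System.Linq;
        using Dapper;

        string connectionString = "..."; // would come from configuration

        // Dapper maps result columns to typed properties by name --
        // no DataSet, no Tables[0].Rows[0] plumbing, no manual ToString().
        using (var conn = new SqlConnection(connectionString))
        {
            Foo foo = conn.Query<Foo>(
                "SELECT FooID, FooName FROM Foo WHERE FooID = @id",
                new { id = 1234 }).SingleOrDefault();
        }

        public class Foo
        {
            public int FooID { get; set; }
            public string FooName { get; set; }
        }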

  • Is it a waste of time to free resources before I exit a process?

    - by Martin
    Let's consider a fictional program that builds a linked list in the heap, and at the end of the program there is a loop that frees all the nodes, and then exits. For this case let's say the linked list is just 500K of memory, and no special memory management is required. Is freeing the nodes a waste of time, because the OS will do that anyway? Will there be different behavior later? Does this differ according to the OS version? I'm mainly interested in UNIX-based systems, but any information will be appreciated. Today I had my first lesson in an OS course, and I'm wondering about this now.

  • How do you navigate and refactor code written in a dynamic language?

    - by Philippe Beaudoin
    I love that writing Python, Ruby or JavaScript requires so little boilerplate. I love simple functional constructs. I love the clean and simple syntax. However, there are three things I'm really bad at when developing large software in a dynamic language:

    - Navigating the code
    - Identifying the interfaces of the objects I'm using
    - Refactoring efficiently

    I have been trying simple editors (i.e. Vim) as well as an IDE (Eclipse + PyDev), but in both cases I feel like I have to commit a lot more to memory and/or constantly "grep" and read through the code to identify the interfaces. As for refactoring, for example changing method names, it becomes hugely dependent on the quality of my unit tests. And if I try to isolate my unit tests by "cutting them off" from the rest of the application, then there is no guarantee that my stubs' interfaces stay up to date with the objects I'm stubbing. I'm sure there are workarounds for these problems. How do you work efficiently in Python, Ruby or JavaScript?

  • Time tracking and payment registration architecture

    - by egis
    The title might be a little bit incorrect. :) Anyway, I'm building software where employees input the time they worked per day (work hours) and the employer "pays" for this time. Payment is done outside this system, so the employer just "confirms" (via a checkbox or something like it) which work hours are paid. So the question is: what is the best way (both UI-wise and data-storage-wise) to implement this?

    At the moment I have this idea: the employee selects a week and manually (with some JavaScript helpers, like "fill the same time for all days") inputs work hours for every day of the week. The employer confirms payment the same way the employee inputs data (selects a week, confirms each day). Data is saved in the DB as Unix timestamps (one day per table row). The problem is the 14 inputs (7 days x ("hours from" + "hours to" inputs)), yet this approach seems fairly easy to implement. Maybe I'm overlooking something and this can be done differently and better? Maybe someone has an example of already-working software?
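    To make the storage idea concrete, a rough sketch of one possible row model (names invented; one row per employee per day, as described above):

        // Hypothetical row model for the one-row-per-day scheme.
        public class WorkHoursEntry
        {
            public int  EmployeeId { get; set; }
            public long WorkedFrom { get; set; } // Unix timestamp: start of work
            public long WorkedTo   { get; set; } // Unix timestamp: end of work
            public bool Paid       { get; set; } // set when the employer confirms payment
        }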

  • Live programming help

    - by frazras
    This idea has been floating around my head for a few years. I started some work on it, but I just want to know if it is feasible, sensible, or if there is something else like it out there. I don't want to find out I was wasting time on a solved issue. Whenever I have a programming issue, this is my sequence:

    1. Google it! That usually brings up a lot of things: blogs, forums, stackoverflow, stackexchange, and even the official docs of the language/framework/CMS.
    2. Ask on IRC: I format my question and try to get people on IRC to help me.
    3. Make a post: I create a post on forums/stackoverflow/stackexchange, or shout on twitter with hashtags.

    Now, a lot of the time I am in the middle of a project with a deadline. So I want answers NOW!!! Sometimes just 5-15 minutes worth of attention. Usually by the time I am failing to get answers at step 2, I am imagining how many people are ONLINE NOW with the skill and my exact answer, but playing video games, watching YouTube or idling online. If they were motivated, they would invest the 15 minutes helping me, and that would make a world of difference. I am even in positions where I would PAY for that 15 minutes of instant help. If your rate is as much as $100/hour (a relatively good programmer), that is $25 that might save me 3 hours. This help would be live: text chat/skype/phone/screenshare. Should I continue developing this idea, or is there a better alternative out there? Or is this an unfeasible idea?

  • from MS Biology to BS Computer Science [on hold]

    - by Air Borne
    I'm Marco from Italy and I'd like to ask you for advice about my career. I hold an MS degree in Biology; I enjoyed studying it a lot and I got very good grades, but I didn't know what to do with my degree in real life. A few months ago I began to read a book about Python programming (Introduction to Computer Science, Zelle J.) and I've had great fun learning Python as a beginner -- I wake up in the morning thinking about doing exercises and writing simple programs with Python. :) I'm also watching free lectures from MIT OpenCourseWare, and I feel a certain degree of regret for never asking myself what computer science was, since it seems to me a magic world. After weeks of doubts, I made a move: I applied for a CS bachelor's degree abroad, I got an interview, and I'm going to start this great adventure next September. I feel incredibly excited about it, but a little bit scared too. Scared because sometimes I think I'm making a great mistake restarting my life with a bachelor's in a completely different area of study. Sometimes I hear people saying the IT market is bad; sometimes I hear quite the opposite. Moreover, some colleagues of mine suggested I try to get into bioinformatics instead of CS. My question is: I want to really discover if CS is for me -- the passion of my life. I know I'm just a beginner and can't say anything about it yet. What do you suggest: CS or bioinformatics? If I get a BS in CS, could I get into bioinformatics without relevant experience, taking into account that I have an MS Biology degree? Any comment is appreciated; thanks in advance.

  • Understanding how memory contents map into a struct

    - by user95592
    I am not able to understand how bytes in memory are mapped into a struct. My machine is a little-endian x86_64. The code was compiled with gcc 4.7.0 from the mingw-w64 distribution for Win64.

    These are the contents of the relevant memory fragment:

        ...450002cf9fe5000040115a9fc0a8fe...

    And this is the struct definition:

        typedef struct ip4 {
            unsigned int ihl     :4;
            unsigned int version :4;
            uint8_t  tos;
            uint16_t tot_len;
            uint16_t id;
            uint16_t frag_off;  // flags=3 bits, offset=13 bits
            uint8_t  ttl;
            uint8_t  protocol;
            uint16_t check;
            uint32_t saddr;
            uint32_t daddr;
            /* The options start here. */
        } ip4_t;

    When a pointer to such a structure (call it *ip4) is initialized to the starting address of the memory region pasted above, this is what the debugger shows for the struct's fields:

        ip4: address=0x8da36ce
        ip4->ihl:     address=0x8da36ce, value=0x5
        ip4->version: address=0x8da36ce, value=0x4
        ip4->tos:     address=0x8da36d2, value=0x9f
        ip4->tot_len: address=0x8da36d4, value=0x0
        ...

    I see how ihl and version are mapped: 4 bytes for a long integer, little-endian. But I don't understand how tos and tot_len are mapped, i.e. which bytes in memory correspond to each one of them. Thank you in advance.
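    For reference, here is how the dumped bytes decode if the fragment is read as a standard IPv4 header, where multi-byte fields are in network (big-endian) byte order -- a sketch that decodes the bytes by hand rather than through the struct, using only the bytes actually shown:

        // Hand-decoding the dump, assuming it is an IPv4 header in network
        // (big-endian) byte order. The truncated tail of the dump is ignored.
        byte[] b = { 0x45, 0x00, 0x02, 0xcf, 0x9f, 0xe5, 0x00, 0x00,
                     0x40, 0x11, 0x5a, 0x9f };

        int version = b[0] >> 4;            // 4 (IPv4): high nibble of byte 0
        int ihl     = b[0] & 0x0F;          // 5 (5 * 4 = 20-byte header): low nibble
        int tos     = b[1];                 // 0x00 -- tos is simply byte 1
        int totLen  = (b[2] << 8) | b[3];   // 0x02cf = 719 -- bytes 2-3, big-endian
        int id      = (b[4] << 8) | b[5];   // 0x9fe5
        int fragOff = (b[6] << 8) | b[7];   // 0x0000
        int ttl     = b[8];                 // 0x40 = 64
        int proto   = b[9];                 // 0x11 = 17 (UDP)
        int check   = (b[10] << 8) | b[11]; // 0x5a9f

        System.Console.WriteLine($"v{version} ihl={ihl} len={totLen} proto={proto}");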

  • Functional Programming - Does Knowing It Help Job Prospects?

    - by Jetti
    The main language that I use at the moment is C# and I am the most comfortable with it. However, I have started dabbling in F# and Haskell and really enjoy those languages. I would love to improve my skills in either of them over time, since it truly is fun for me to use them (as opposed to Ruby, which is hyped as "fun" -- I just don't get where the fun is, but I digress...). My question is directed at those who have hired/interviewed for programming positions (junior/mid-level): if you see a functional programming language on a resume, does it affect your opinion (whether positively or negatively) of that candidate? My rationale for knowledge of functional programming affecting the view of a candidate is that it can show the candidate can adapt to different methodologies and take a multi-dimensional approach to problems rather than the "same old OO approach". (This may be off-base -- please let me know if this assumption is as well!)

  • What is the difference between debugging and testing?

    - by persepolis
    Introduction To Software Testing (Ammann & Offutt) mentions on p. 32 a 5-level testing maturity model:

    Level 0 - There's no difference between testing and debugging.
    Level 1 - The purpose of testing is to show that the software works.
    Level 2 - The purpose of testing is to show that the software doesn't work.
    Level 3 - The purpose of testing is not to prove anything specific, but to reduce the risk of using the software.
    Level 4 - Testing is a mental discipline that helps all IT professionals develop higher quality software.

    Although they don't go into much further detail. What are the differences between debugging and testing?

  • How should I manage my time?

    - by Tathagata
    There are times when a single bug keeps eating away at your time like hell, for example this one. I generally end up wasting hours and then realize I've fallen terribly behind my schedule and haven't completed other tasks. With n tabs open in the browser, I end up posting the question on Stack Overflow as a last resort. What are some time management techniques that let you stop, rewind and get back in action when faced with a roadblock?

  • Is it correct to fix bugs without adding new features when releasing software for system testing?

    - by Pratik
    This question is for experienced testers or test leads. Here is a scenario from a software project: say the dev team has completed the first iteration of 10 features and released it to system testing. The test team has created test cases for these 10 features and estimated 5 days for testing. The dev team of course cannot sit idle for 5 days, so they start creating 10 new features for the next iteration. During this time the test team finds defects and raises some bugs. The bugs are prioritised, and some of them have to be fixed before the next iteration. The catch is that the testers will not accept a new release with any new features or changes to existing features until all those bugs are fixed. The test team's position is: how can we guarantee a stable release for testing if we also introduce new features along with the bug fixes? They also cannot rerun all their regression test cases every iteration. Apparently this is proper testing process according to the ISTQB. This means the dev team has to create a branch of code solely for bug fixing and another branch where they continue development, with more merging overhead, especially with refactoring and architectural changes. Do you agree that this is a common testing principle? Is the test team's concern valid? Have you encountered this in practice in your project?

  • Are RSS feeds used? [closed]

    - by acidzombie24
    I was thinking about implementing RSS feeds on my site. The one thing that came to mind is: are RSS feeds ever used?! I don't use them, nor do I know anyone who does. I know that to use them you need an RSS feed reader or built-in browser support. I like my browser clean, and I know many people don't know how to use or configure their browser. So my thinking is that RSS goes unused by 99.9999% of people (that's four decimal places, which is really saying something). Twitter, on the other hand, or even email, may well be used. But I have doubts about RSS feeds. Can anyone give me statistics, or change my mind about implementing it? I suspect it would be simple to implement, but I don't think I should bother if no one will ever use it. I don't even use SE/SO RSS feeds.

  • Looking for new language and new technology [closed]

    - by Basim
    Back when Microsoft released .NET in 2002 or so -- when I look at that time, I say to myself: what if I had picked one of Microsoft's languages back then and kept working in it? Of course I would be a professional at it by now. I am looking for a new language that is on the rise and will be a big thing in the next 5-10 years, so that when that time comes I can see the big picture and be one of the few people who started from the beginning with X programming language or technology. My interest is web development.

  • so, my Cassandra consultant left me and now I'm thinking about reverting to mysql

    - by sathia
    I run a mid-size community, and some time ago I started to develop social capabilities such as follow, status update, wall etc. For some reason I thought that Cassandra was the right tool for the job, so I looked online for a Cassandra developer and found a very talented one. Unluckily, in the midst of the development the dev left (too many jobs), and so I'm here with a very nice class, a very nice demo, but a lot of fears that I won't be able to handle basic things such as compaction, scaling etc. My biggest fear is to go online with all this coolness and then have a site inaccessible for hours or days. The MySQL consultant (very talented too) keeps telling me that I should stick with MySQL, which I know rather well, and in case something goes wrong we can manage. In that case I would take the class made for Cassandra and adapt it for MySQL. My question is this: should I find another dev/consultant and stick with Cassandra, because for social things it is definitely the best tool for the job, or should I listen to the MySQL consultant and revert to MySQL?

    - About 15k logins each day
    - An average of 20 actions per user
    - An average of 6 followers per user

    (These are current statistics, but of course I'd like to increase them as much as possible.)

  • Update Variable in TeamCity PowerShell script

    - by Jake Rote
    I am trying to update an environment variable in TeamCity using PowerShell, but the script does not update the value of the variable. How can I do this? My current code is below (it gets currentBuildNumber fine):

        $currentBuildNumber = "%env.currentBuildNumber%"
        $newBuildNumber = ""
        Write-Output $currentBuildNumber

        If ($currentBuildNumber.StartsWith("%MajorVersion%") -eq "True") {
            $parts = $currentBuildNumber.Split(".")
            $parts[2] = ([int]::Parse($parts[2]) + 1) + ""
            $newBuildNumber = $parts -join "."
        } Else {
            $newBuildNumber = '%MajorVersion%.1'
        }

        # What I have tried:
        $env:currentBuildNumber = $newBuildNumber
        Write-Host "##teamcity[env.currentBuildNumber '$newBuildNumber']"
        Write-Host "##teamcity[setParameter name='currentBuildNumber' value='$newBuildNumber']"

  • Finding most Important Node(s) in a Directed Graph

    - by Srikar Appal
    I have a large (~20 million nodes) directed graph with in-edges and out-edges. I want to figure out which parts of the graph deserve the most attention. Often most of the graph is boring, or at least it is already well understood. The way I am defining "attention" is by the concept of "connectedness": how can I find the most connected node(s) in the graph? In what follows, one can assume that nodes by themselves have no score, the edges have no weight, and they are either connected or not. This website suggests some pretty complicated procedures, like n-dimensional space, eigenvectors, graph centrality concepts, PageRank etc. Is this problem really that complex? Can I not do a simple breadth-first traversal of the entire graph, where at each node I figure out a way to find the number of in-edges? The node with the most in-edges is the most important node in the graph. Am I missing something here?
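    To make the simple approach concrete, a sketch of the in-degree count (nodes as ints, graph as adjacency lists -- assumed representations). Note that a plain pass over the edge lists suffices; a full breadth-first traversal isn't even needed just to count in-edges:

        using System.Collections.Generic;

        // One pass over the adjacency lists to count in-edges: O(V + E).
        // The node(s) with the highest count are the "most connected" under
        // the simple definition above. For ~20M integer-keyed nodes, a plain
        // int[] indexed by node id would be cheaper than a dictionary.
        static int MostConnectedNode(Dictionary<int, List<int>> outEdges)
        {
            var inDegree = new Dictionary<int, int>();
            foreach (var node in outEdges.Keys)
                inDegree[node] = 0;

            foreach (var targets in outEdges.Values)
                foreach (var t in targets)
                    inDegree[t] = inDegree.TryGetValue(t, out int d) ? d + 1 : 1;

            int best = -1, bestDegree = -1;
            foreach (var kv in inDegree)
                if (kv.Value > bestDegree) { best = kv.Key; bestDegree = kv.Value; }
            return best;
        }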

  • Software/Hardware Development?

    - by SwarthyMantooth
    Sincere apologies if this is the wrong place to ask this. I am a computer engineering student currently on the first of the 5 co-ops I am required to take, and I've noticed that all I'm really offered are software engineering jobs. I love developing software, but I don't want to lose out on the hardware aspect of computers, as having hybrid knowledge of the two is why I chose this major in the first place. So I guess my question is: are there any software engineering jobs that still let you handle and interface with hardware at a very low level? Or am I forced to choose which focus I love more?

  • What's the difference between MCSD and MCPD?

    - by Damien
    I am looking at the website, but the difference between the two isn't explained. To complicate matters, a Google search tells me that developers should focus on the MCPD qualifications, but an examination of the MCPD section of the site tells me that the Web Application exams will expire in July 2013! It seems out of date. Meanwhile the MCSD exam seems to be a lot more up to date, with references to HTML5/ASP.NET 4.5 MVC applications. What's the difference? Is the MCPD track going to be updated after July? Thanks

  • Could someone break this nasty habit of mine please?

    - by MimiEAM
    I recently graduated in CS and was mostly unsatisfied, since I realized that I received only a basic theoretical approach to a wide range of subjects (which is what college is supposed to do, but still...). Anyway, I got into the habit of spending a lot of time looking for implementations of concepts, and upon finding them I would use them as guides to writing my own implementations, just for fun. But now I feel like the only way I can fully understand a new concept is by trying to implement it from scratch, no matter how unoptimized the result may be. This behavior leads me to choose the hard, time-consuming way by default instead of using a nicely written library, until I hit my head against a huge wall and then try to find a library that works for my purpose. Does anyone else do this, and why? It seems so weird -- why would anyone (including me) do that? Is it a bad practice? And if so, how can I stop doing it?

  • Good practice or service for monitoring unhandled application errors for a small organization

    - by palto
    I work with multiple pieces of software with varying ways of monitoring for errors. When I write software, I usually send an email with the stack trace to the admins (usually me). Some customer software is monitored by a team who check that a particular batch run was successful. Other software might not have any monitoring at all (someone will call when things go horribly wrong). Sending emails is good, except that when things start going wrong my mail fills up fast. Also, I don't want to solve the same problem in code for every piece of software. Is there some relatively cheap and low-maintenance software or practice to handle this? I want it to be cheap/low-maintenance because I usually work alone or in teams of 5 or fewer. For example, it would be great if errors were aggregated, so I don't get 10,000 emails when something unexpected happens.

    For clarification: by unhandled errors I mean exceptions that were unhandled by application code and propagated to Tomcat or JBoss. I don't need help with how to catch those errors; I need help with what to do with them. Is there any cloud application that I could send my errors to? Or some simple server to install? Or some library that can handle errors using configuration files? I use Java, if that is any help.

  • Current trends in Random Access Memory

    - by Nutel
    As far as I know, because of the laws of physics there will not be any tangible improvement in CPU cycles per second in the near future. However, because of the von Neumann bottleneck, that seems not to be an issue for non-server applications. So what about RAM -- are there any upcoming technologies that promise to improve memory speed, or are we stuck with the current situation until quantum computers come out of the labs?
