Search Results

Search found 14074 results on 563 pages for 'programmers'.


  • how do I write a functional spec quickly and efficiently

    - by giddy
    So I just read some fabulous articles by Joel on specs here. (Written in 2000!!) I read all 4 parts, but I'm looking for some methodical approaches to writing my specs. I'm the only lonely dev working on this fairly complicated app (or family of apps) for a very well known finance company. I've never made something this serious. I started out writing something like a bad spec, an overview of sorts, and it has wasted a LOT of my time. I've also made 3 mockup-kinda-thingies for my client, so I have a good understanding of what they want. I also released a preview (a throwaway working app with the most basic workflow), and I've only written and tested some of the very core/base systems. I think the mistake I've been making so far is not writing a detailed spec, so I'm getting to it now. The whole thing comprises: an MVC website (for admins and data viewing), 2 Silverlight modules (for 2 specific tasks), and 1 desktop application. I'm totally short on time and resources and need to get this done quickly; I also need to make sure these guys can read it equally quickly and painlessly. So how do I go about it? I'm looking for any tips, any real-world stuff: how do you guys usually do it? Do you make a mock screenie of every dialog/form/page? I'm thinking of making a dummy ASP.NET Web Forms project, then filling in HTML files in folders and making it look like my MVC URL structure, then having a section in the spec for the website and writing up a page for every URL I've got, with a screenie. For my WinForms app, I've made somewhat of a demo WinForms project; would I then put in a dialog or structure everything as I would in the real app and then screenshot it?

    Read the article

  • Are SQL Injection vulnerabilities in a PHP application acceptable if mod_security is enabled?

    - by Austin Smith
    I've been asked to audit a PHP application. No framework, no router, no model. Pure PHP. Few shared functions. HTML, CSS, and JS all mixed together. I've discovered numerous places where SQL injection would be easily possible. There are other problems with the application (XSS vulnerabilities, rampant inline CSS, code copy-pasted everywhere) but this is the biggest. Sometimes they escape inputs, not using a prepared query or even mysql_real_escape_string(), mind you, but using addslashes(). Often, though, their queries look exactly like this (pasted from their code but with columns and variable names changed): $user = mysql_query("select * from profile where profile_id='".$_REQUEST["profile_id"]."'"); The developers in question claimed that they were unable to hack their application. I tried, and found mod_security to be enabled, resulting in HTTP 406 for some obvious SQL injection attacks. I believe there to be sophisticated workarounds for mod_security, but I don't have time to chase them down. They claim that this is a "conceptual" matter and not a "practical" one since the application can't easily be hacked. Their internal auditor agreed that there were problems, but emphasized the conceptual nature of the issues. They also use this conceptual/practical argument to defend against inline CSS and JS, absence of code organization, XSS vulnerabilities, and massive amounts of repetition. My client (rightly so, perhaps) just wants this to go away so they can launch their product. The site works. You can log in, do what you need to do, and things are visibly functional, if slow. SQL Injection would indeed be hard to do, given mod_security. Further, their talk of "conceptual vs. practical" is rhetorically brilliant, considering that my client doesn't understand web application security. I worry that they've succeeded in making me sound like an angry puritan. In many ways, this is a problem of politics, not technology, but I am at a loss. As a developer, I want to tell them to toss the whole project and start over with a new team, but I face a strong defense from the team that built it and a client who really needs to ship their product. Is my position here too harsh? Even if they fix the SQL Injection and XSS problems can I ever endorse the release of an unmaintainable tangle of spaghetti code?
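    For reference, the parameterized version of that query that I'd expect to see (a minimal sketch using PDO; the connection details and the table/column names are just the placeholders from my sanitized example above) would be roughly:

        <?php
        // Minimal sketch of the same lookup using a PDO prepared statement.
        // Connection details and schema names are placeholders, not the real app's.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret', [
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ]);

        $stmt = $pdo->prepare('SELECT * FROM profile WHERE profile_id = :profile_id');
        $stmt->execute([':profile_id' => $_REQUEST['profile_id']]);
        $user = $stmt->fetch(PDO::FETCH_ASSOC);

    The point being that the user-supplied value never gets concatenated into the SQL text at all, with or without mod_security in front of it.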

    Read the article

  • How to gain Professional Experience in Java/Java EE Development

    - by Deepak Chandrashekar
    I have been seeing opportunities go past me for just one reason: not having professional industry experience. I tell many employers that I'm capable of doing the job and show them the work I did during my academic studies, as well as several personal projects in which I took extra time and effort to teach myself the new industry-standard technologies. But still, all they want is 2-3 years of experience in industry. I'm a recent graduate with a Master's degree in Computer Science. I've been applying for quite a few jobs, and most of them require a minimum of 2 years of experience. So I thought somebody here might give me some realistic ideas about getting experience that can be considered professional. Any kind of constructive comment is welcome.

    Read the article

  • How to Mentor a Junior Developer

    - by Josh Johnson
    This title is a little broad, but I may need to give a little background before I can ask my question properly. I know that similar questions have been asked here already, but in my case I'm not asking whether I should be mentoring someone or whether the person is a good fit for being a software developer. That is not my place to judge. I have not been asked outright, but it is apparent that I and other fellow senior developers are to mentor the new developers who start here. I have no problem with this whatsoever and, in many cases, it lends me a fresh perspective on things and I end up learning in the process. Also, I remember how beneficial it was at the beginning of my career when someone would take some time to teach me something. When I say "new developer" they could be anywhere from fresh out of college to having a year or two of experience. Recently and in the past we've had people start here who seem to have an attitude toward development/programming which is different from mine and hard for me to reconcile; they seem to extract just enough information to get the task done but not really learn from it. I find myself going over the same issues with them again and again. I understand that part of this could be a personality thing, but I feel it's my job to do my best and sort of push them out of the nest while they're under my wing, so to speak. How can I impart just enough information that they will learn, but not so much that I solve the problem for them? Or perhaps: what is the proper response to questions that are designed to take the path of least resistance, a response that, in essence, forces them to learn instead of taking the easy way out? These questions are probably more general teaching questions and don't have that much to do specifically with software development. Note: I do not get a say in what tasks they work on. Management doles the tasks out, and it could be anything from a very simple bug fix to starting an entire application by themselves. While this is not ideal by any means and obviously presents its own gauntlet of challenges, I feel that topic is best left for another question. So the best I can do is help them with the problem at hand, try to help them break it down into simpler problems, and check their commit logs and point out mistakes they made. My main objectives are to:
    1. Help them out and give them the tools they need to start becoming more self-reliant.
    2. Steer them in the right direction and break bad development habits early on.
    3. Lessen the amount of time I spend with them (the personality type described above seems to need much more one-on-one time and does not do well over IM or email; while that's generally fine, I can't always stop what I'm working on, break my stride, and help them debug an error at a moment's notice; I have my own projects that need to get done).

    Read the article

  • How abstract should you get with BDD

    - by Newton
    I was writing some tests in Gherkin (using Cucumber/SpecFlow) and was wondering how abstract I should get with my tests. In order not to make this open-ended, which of the following is better for BDD?

    1. Given I am logged in with email [email protected] and password 12345
       When I do something
       Then something happens

    as opposed to

    2. Given I am logged in as the Administrator
       When I do something
       Then something happens

    The reason I am confused is that 1 is more based on the behaviour (filling in email and password), while 2 is easier to process and makes the tests easier to write.
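    Part of what makes option 2 attractive to me is that the concrete details can still live in the step definition. A rough sketch of what I mean, shown as a Behat-style PHP context purely for illustration (my actual stack is Cucumber/SpecFlow, and the helper method and credentials below are invented):

        <?php
        use Behat\Behat\Context\Context;

        // Sketch only: the abstract step from option 2 delegates to concrete
        // credentials in one place, instead of repeating them in every scenario.
        class AuthContext implements Context
        {
            /**
             * @Given I am logged in as the Administrator
             */
            public function iAmLoggedInAsTheAdministrator()
            {
                $this->logInWith('admin@example.invalid', '12345'); // invented values
            }

            private function logInWith($email, $password)
            {
                // ...drive the UI or API with the given credentials...
            }
        }

    So the scenario stays readable to the business, and the behaviour of filling in email and password is still exercised, just one level down.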

    Read the article

  • What is Object Oriented Programming ill-suited for?

    - by Richard JP Le Guen
    In his book Refactoring, Martin Fowler notes that when developers learn something new, they often don't consider when it is inappropriate for the job: Ten years ago it was like that with objects. If someone asked me when not to use objects, it was hard to answer. [...] It was just that I didn't know what those limitations were, although I knew what the benefits were. Reading this, it occurred to me that I don't know what the limitations or potential disadvantages of Object-Oriented Programming are. What are the limitations of Object-Oriented Programming? When should one look at a project and think "OOP is not best suited for this"?

    Read the article

  • Experience vs. versatility

    - by Florin Bombeanu
    Let's say a .NET programmer works at a company which provides software on demand, not as a product. The programmer works in WPF for a period of time and invests lots of time in it. He/she gets very good at WPF, Windows Forms, and desktop development in general. But the company now has to provide a web application, so the developer has to learn MVC or Web Forms. He/she is not experienced in web development, so he/she starts investing time in this new technology and in time gets good at it. But this time the company has to provide a SharePoint solution, and so on. What is more important: being very, very good at a certain technology, or being as versatile as possible, knowing less about each technology but covering a greater area of expertise? Should the programmer keep studying and working in WPF until he/she reaches guru level, or is it a good thing that they had to learn other technologies as well? I agree with those of you who will say that when learning different technologies you also learn things which are useful no matter what technology you're programming in. But eventually, when the programmer wants to change jobs, will it matter more that he/she knows some WPF, MVC, or SharePoint than that he/she is insanely good at one of them? I would think the second one is more important, since most companies are looking for a developer for a certain technology. I don't think there are many companies looking for technical know-it-all people. What do you think?

    Read the article

  • Explanation of the definition of interface inheritance as described in GoF book

    - by Geek
    I am reading the first chapter of the GoF book. Section 1.6 discusses class versus interface inheritance:

        Class versus Interface Inheritance. It's important to understand the difference between an object's class and its type. An object's class defines how the object is implemented. The class defines the object's internal state and the implementation of its operations. In contrast, an object's type only refers to its interface -- the set of requests on which it can respond. An object can have many types, and objects of different classes can have the same type. Of course, there's a close relationship between class and type. Because a class defines the operations an object can perform, it also defines the object's type. When we say that an object is an instance of a class, we imply that the object supports the interface defined by the class. Languages like C++ and Eiffel use classes to specify both an object's type and its implementation. Smalltalk programs do not declare the types of variables; consequently, the compiler does not check that the types of objects assigned to a variable are subtypes of the variable's type. Sending a message requires checking that the class of the receiver implements the message, but it doesn't require checking that the receiver is an instance of a particular class. It's also important to understand the difference between class inheritance and interface inheritance (or subtyping). Class inheritance defines an object's implementation in terms of another object's implementation. In short, it's a mechanism for code and representation sharing. In contrast, interface inheritance (or subtyping) describes when an object can be used in place of another.

    I am familiar with the Java and JavaScript programming languages and not really familiar with C++, Smalltalk, or Eiffel as mentioned here, so I am trying to map the concepts discussed here to Java's way of doing classes, inheritance, and interfaces. This is how I think of these concepts in Java: a class is always a blueprint for the objects it produces, and the interface (as in "the set of all possible requests that the object can respond to") that an object of that class possesses is defined at compilation time, because the class of the object will have implemented those interfaces. The requests that an object of that class can respond to are the set of all the methods in the class (including those implemented for the interfaces that the class implements). My specific questions are:
    1. Am I right in saying that Java's way is more similar to C++ as described in the third paragraph?
    2. I do not understand what is meant by interface inheritance in the last paragraph. In Java, interface inheritance is one interface extending another interface, but I think the word "interface" has some other, overloaded meaning here. Can someone provide an example in Java of what is meant by interface inheritance here so that I understand it better?

    Read the article

  • Type dependencies vs directory structure

    - by paul
    Something I've been wondering about recently is how to organize types in directories/namespaces w.r.t. their dependencies. One method I've seen, which I believe is the recommendation for both Haskell and .NET (though I can't find the references for this at the moment), is:

        Type
        Type/ATypeThatUsesType
        Type/AnotherTypeThatUsesType

    My natural inclination, however, is to do the opposite:

        Type
        Type/ATypeUponWhichTypeDepends
        Type/AnotherTypeUponWhichTypeDepends

    Questions:
    1. Is my inclination bass-ackwards?
    2. Are there any major benefits/pitfalls of one over the other?
    3. Is it just something that depends on the application, e.g. whether you're building a library vs. doing normal coding?

    Read the article

  • Dual Inspection / Four Eyes Principle

    - by Ralf
    I have a requirement to implement some kind of dual inspection or four-eyes principle as a feature of my software, meaning that every change to an object made by user A has to be checked by user B. A trivial example would be a publishing system where an author writes an article and another person has to proofread it before it is published. I am a little bit surprised that you find nearly nothing about it on the net: no patterns, no libraries (besides cibet), no workflow solutions, etc. Is this requirement really so uncommon, or am I searching for the wrong terms? I am not looking for a specific solution, more for a pattern or best-practice approach. Update: the above example is really trivial, so let's add some more complexity to it. The article has been published, but it now needs an update. Taking the article offline for the update is not an option, but the update has to be proofread, too.
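    To make the idea concrete, this is roughly the shape I keep sketching (a minimal illustration with invented names, not a solution I've settled on): an edit creates a pending revision that a second, different user must approve before it replaces the published content, so the article never has to go offline.

        <?php
        // Sketch only: pending revision alongside the published version.
        class Article
        {
            public $published;       // currently visible content
            public $pendingRevision; // proposed change awaiting review, or null
            public $pendingAuthor;   // user who proposed the change

            public function proposeChange($userA, $newContent)
            {
                $this->pendingRevision = $newContent;
                $this->pendingAuthor   = $userA;
            }

            public function approve($userB)
            {
                if ($userB === $this->pendingAuthor) {
                    throw new DomainException('Reviewer must differ from author (four eyes).');
                }
                $this->published       = $this->pendingRevision;
                $this->pendingRevision = null;
                $this->pendingAuthor   = null;
            }
        }

    Whether something like this deserves the name "pattern", or whether there is an established one I should be using instead, is exactly what I'm asking.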

    Read the article

  • how to improve design ability

    - by Cong Hui
    I recently went on a couple of interviews, and all of them asked one or two design questions, like how you would design a chess game, Monopoly, and so on. I didn't do well on those, since I am a college student and lack the experience of implementing big and complex systems. I figure the only way to improve my design capability is to read lots of other people's code and try to implement things myself. So, for those companies that ask these questions, what are their real goals? I figure most college grads start off working in a team guided by a senior leader in their first jobs; they might not have lots of design experience fresh out of college. Could anyone give pointers on how to practice those skills? Thank you very much.

    Read the article

  • What should be included in risk management section of software's architecture documentation?

    - by Limbo Exile
    I am going to develop a Java application (a Spring web application that will be used to extract data from various data sources) and I want to include risk management of the software in the architecture documentation. By risk management (I am not sure if this is the right name) I mean documenting what can go wrong with the software and what to do in those cases. At first I tried to draft some lists, including things like database performance degradation, changes to external components that the software interacts with, security breaches, etc. But as I am not an experienced developer, I cannot rely on those drafts; I don't think they are exhaustive. I searched the web hoping to find something similar to the Joel Test, or any other resource that cites the most common causes of problems that should be included and analyzed in risk management documentation, but I haven't found much. Finally, my question is: what should be included in the risk management section of a software architecture document?

    Read the article

  • TTS on App Engine

    - by yati sagade
    I have written a small front-end to the Festival TTS system using Python/Django. I wish to deploy it on the Google App Engine cloud. A few questions:
    1. My application uses the Festival app 'text2wave'. Will it work on the cloud?
    2. I have used Python primitives like subprocess.call() to invoke the aforementioned program. Will that work?
    3. If your answer to either or both of (1) and (2) is no, is there a free API on the web that I can use (from App Engine)? I read somewhere about placing calls from Phono to a Voxeo backend, but I'm not sure what that means. I am aware of the Google Translate extension that allows translation using an HTTP GET (REST) request, but there the text is limited to 100 chars. Bad. Plus, they may take it down at any point in time.

    Read the article

  • What benefits can I get upgrading my ASP.NET (Webform) + DAL(EF) + Repository + BLL structure to MVC?

    - by Etienne
    I'm in the process of defining the approach that may best fit our needs for a big web application development project. For now, I'm thinking of going with an ASP.NET architecture with a DAL using Entity Framework, a repository concept so that the BLL does not access the DAL directly, and a BLL that calls the repository and performs every manipulation necessary to prepare data to push into a presentation layer (.aspx files). I don't plan to use ASP.NET controls; I prefer to keep things simple and lightweight using plain HTML and jQuery UI controls, and to do most of the server calls with jQuery Ajax. Sometimes, when needed, I plan to use handlers (.ashx) to call BLL methods that return JSON or HTML to the client for dynamic stuff. My solution also has a test project that mocks the repository with in-memory data, so BLL methods can be tested without relying on the database... It may be useful to add that we will build a big application over this architecture, with hundreds of tables and stored procedures and a lot of reading from and writing to the database. My question is: having this architecture in mind, are there any evident advantages I could obtain by using an MVC3 project instead of the described architecture based on Web Forms? Do you see any problems in this architecture that may cause us trouble during the next steps of development? I know the MVC pattern from using it in other projects with Django... but the Microsoft MVC implementation looks so much more complex and verbose than Django's, and that's why I'm hesitating (or waiting for a little push?) right now before jumping into it... We are in a real project with deadlines and don't want to slow the development process without any real benefits.

    Read the article

  • what's included in a typical computer architecture class? [closed]

    - by sq1020
    Does this description fit what's usually included in a computer architecture class?

        Computer Organization and Assembly Language: An introduction to the hardware organization and assembly language of the Intel processor. Topics include memory hierarchy and design, CPU design, pipelining, addressing modes, subroutine linkage, polled input/output, interrupts, high-level language interfacing, and macros.

    Read the article

  • Legal concern over "borrowed" code

    - by iandisme
    A company my friend works for (let's call him Me) recently unveiled a new face for their internal "networking" website. This new face looks remarkably like Facebook, and indeed, examination of the source code reveals that it's almost identical: The code, class names, and even the fonts are the same. There is also no indication that Facebook is in any way involved or aware. I know this is unethical, but is it illegal? I can't find anything concrete about this to help Me decide what to do about it. EDIT: We're talking front-end code. It does not appear to be linking to Facebook in any way.

    Read the article

  • Diagram to show code responsibility

    - by Mike Samuel
    Does anyone know how to visually diagram the ways in which the flow of control in code passes between code produced by different groups, and how that affects the amount of code that needs to be carefully written/reviewed/tested for system properties to hold? What I am trying to help people visualize are arguments of the form: For property P to hold, nd developers have to write application code, Ca, without certain kinds of errors, and nm maintainers have to make sure that the code continues to not have these kinds of errors over the project lifetime. We could reduce the error rate by educating nd developers and nm maintainers. For us to be confident that the property holds, ns specialists still need to test or check |Ca| lines of code and continue to test/check the changes by nm maintainers. Alternatively, we could be confident that P holds if all code paths that could violate P went through tool code, Ct, written by our specialists. In our case, either:
    - test suites alone cannot give confidence that P holds,
    - nd » ns,
    - nm » ns, and
    - |Ca| » |Ct|,
    so writing and maintaining Ct is economical, frees up our developers to worry about other things, and reduces the ongoing education commitment by our specialists; or those conditions do not hold, so focusing on education and testing is preferable.

    Example 1: As a concrete example, suppose we want to ensure that our web-service only produces valid JSON output. Our web-service provides several query and mutation operators that can be composed in interesting ways. We could try to educate everyone who maintains those operations about the JSON syntax, the importance of conformance, and the libraries available, so that when they write to an output buffer, every possible sequence of appends results in syntactically valid JSON. Alternatively, we don't expose an output stream handle to application code, and instead expose a JSON sink, so that every code path that writes a response is channeled through a JSON sink that is written and maintained by a specialist who knows JSON syntax and can use well-written libraries to produce only valid output.

    Example 2: We need to make sure that a service that receives a URL from an untrusted source and tries to fetch its content does not end up revealing sensitive files from the file-system, like file:///etc/passwd. If there is a single standard way that any developer familiar with the application language's libraries would use to fetch URLs, which has file-system access turned off by default, then simply educating developers about the standard mechanism, and testing that file probing fails for some inputs, will probably be sufficient.
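    For Example 1, the kind of sink I have in mind is something as small as the following sketch (PHP chosen just for illustration; the class and method names are invented):

        <?php
        // Illustration of Example 1: application code never touches the raw output
        // stream; it hands values to a sink maintained by the JSON specialist.
        class JsonResponseSink
        {
            private $out;

            public function __construct($out)
            {
                $this->out = $out; // e.g. a stream opened on php://output
            }

            public function emit($data)
            {
                // json_encode, rather than hand-built string appends, so every code
                // path routed through the sink produces syntactically valid JSON.
                $encoded = json_encode($data);
                if ($encoded === false) {
                    throw new RuntimeException('Value cannot be represented as JSON');
                }
                fwrite($this->out, $encoded);
            }
        }

    The diagram I'm after would show which groups' code sits on either side of that narrow interface, so |Ct| is visibly the only code the specialists must keep checking.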

    Read the article

  • Project Codenames - Yea or Nay?

    - by rmx
    Where I work, most of our projects have (or at least attempt) descriptive, useful names. However, we have a few with names that make no sense: I found an assembly named WiFi which actually has nothing whatsoever to do with wi-fi; it is a codename. When I asked why, I was told that it's to protect company secrets in case some intern has a few too many at the pub on Friday and starts chatting about the brand new 'WiFi' project he's been working on. It's clear that some people enjoy finding silly/amusing codenames for their projects (like in this question). My question is: is it really a good idea to use codenames for your projects, or are you better off spending the time to decide upon a descriptive name? My opinion is that in the long run it's better to give your projects relevant names. My reasoning is that if you can't think of a decent name, perhaps you don't really know the requirements well enough. I think there are better ways to 'protect company secrets', and I find it quite confusing when the name does not correlate at all with the content. It's just common sense, surely?! So do you use codenames, and what are your reasons for or against this seemingly common, yet annoying (to me at least) practice?

    Read the article

  • Health problem of a programmer

    - by gunbuster363
    Hi all, I've been bothered by aching fingers for quite a long time. My fingers ache because of too much mouse clicking during office hours, plus playing games after work. I gave up games for a while and my fingers got better, but my right index finger still feels pressure when I click the mouse. I haven't gone to a doctor because I'm afraid the fee would be high and he would just tell me to rest my fingers; also, I don't know what kind of doctor I should go and see. My fingers feel less pressure when I use my expensive DeathAdder at home (what a shame, I bought it for gaming, but now I use it for rest) because its buttons are softer. However, I cannot keep such an expensive mouse at my office because I am afraid people would steal it. I use some tricks when using the mouse, such as single-clicking to open files and adding more desktop shortcuts for common jobs. Do you guys have any other tips for me? Thank you.

    Read the article

  • Algorithm to use for shop floor layout?

    - by jkohlhepp
    I ran into a classroom problem yesterday (business-oriented class, not computer science) and I found it interesting from an algorithmic perspective. The problem goes something like this: Assume there is a shop floor with N different rooms, and you have N different departments that need to go in those rooms. The departments and the rooms are all the same size, so any department could go in any room. There is a known travel distance from each room to each other room. There is also a known number of trips necessary from one department to another (trips are counted the same regardless of which room they originate from, so a trip from A to B is equivalent to a trip from B to A). Given those inputs, determine a layout of departments into rooms which minimizes travel time. What is the best way to approach this problem algorithmically? Is there already a particular algorithm or class of algorithms designed to solve this type of problem? Does this type of problem have a name in computer science? I am not looking for you to design an algorithm to solve this, although feel free to do so if you would like. I'm wondering if this is a problem space that has already been well defined and studied algorithmically and, if so, would like some links to research further. I can see a lot of different data structures and algorithms that might apply to this, and I'm curious which approach would be "best". And don't worry, you are not doing my homework for me. This is not a homework problem per se, as this is a business course and we were simply discussing the concepts and not trying to solve the problem algorithmically.
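    Just to pin down the objective I have in mind (not to solve it), the travel cost of one candidate layout can be computed like this; a small sketch, with all numbers invented:

        <?php
        // Total travel cost for one assignment: sum over department pairs of
        // trips(a, b) * distance(room of a, room of b).
        function layoutCost(array $assignment, array $trips, array $distance)
        {
            $cost = 0;
            $n = count($assignment);
            for ($a = 0; $a < $n; $a++) {
                for ($b = $a + 1; $b < $n; $b++) {
                    $cost += $trips[$a][$b] * $distance[$assignment[$a]][$assignment[$b]];
                }
            }
            return $cost;
        }

        // Example: 3 departments, 3 rooms; $assignment[dept] = room index.
        $trips    = [[0, 4, 1], [4, 0, 2], [1, 2, 0]];
        $distance = [[0, 10, 20], [10, 0, 5], [20, 5, 0]];
        echo layoutCost([0, 1, 2], $trips, $distance), "\n"; // cost of one candidate layout

    The question is really about which layout minimizes that sum, and whether that search problem has an established name and body of work behind it.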

    Read the article

  • Segmentation fault 11 in Mac OS X - C++ [migrated]

    - by Marcos Cesar Vargas Magana
    Hi all. I have a "segmentation fault 11" error when I run the following code. The code actually compiles, but I get the error at run time.

        //** Terror.h **
        #include <iostream>
        #include <string>
        #include <map>

        using std::map;
        using std::pair;
        using std::string;

        template<typename Tsize>
        class Terror {
        public:
            // Inserts a message in the map.
            static Tsize insertMessage(const string& message) {
                mErrorMessages.insert(pair<Tsize, string>(mErrorMessages.size() + 1, message));
                return mErrorMessages.size();
            }
        private:
            static map<Tsize, string> mErrorMessages;
        };

        template<typename Tsize>
        map<Tsize, string> Terror<Tsize>::mErrorMessages;

        //** error.h **
        #include <iostream>
        #include "Terror.h"

        typedef unsigned short errorType;
        typedef Terror<errorType> error;

        errorType memoryAllocationError = error::insertMessage("ERROR: out of memory.");

        //** main.cpp **
        #include <iostream>
        #include "error.h"

        using namespace std;

        int main()
        {
            try {
                throw error(memoryAllocationError);
            }
            catch (error& err) {
            }
        }

    I have done some debugging, and the error happens when the message is being inserted into the static map member. One observation: if I put the line

        errorType memoryAllocationError = error::insertMessage("ERROR: out of memory.");

    inside the "main()" function instead of at global scope, then everything works fine. But I would like to extend the error messages at global scope, not at local scope. The map is declared static so that all instances of "error" share the same error codes and messages. Do you know how I can get this, or something similar, to work? Thank you very much.

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .NET web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshal the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little more than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests - for example, I don't see any reason to mock away the web service - the work performed by the code I'm working on is very light; it almost passes the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state - the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example. One test might be createUserReturnsValidUserId() and might go like this:

        public function createUserReturnsValidUserId()
        {
            // we're assuming a global connection to the service
            $newUserId = $client->CreateUser("user1");
            assertNotNull($newUserId);
            assertNotNull($client->FindUser($newUserId));
            $client->deleteUser($newUserId);
        }

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test w.r.t. the number of users in the system, for example). However, this still seems pretty fragile - lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.
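    One variation I've been considering is pushing the cleanup into the test fixture so a failed assertion can't leave state behind. A rough PHPUnit sketch of that idea (the client class name and its construction here are simplified/invented):

        <?php
        use PHPUnit\Framework\TestCase;

        // Sketch: the fixture owns cleanup, so a failed assertion in the test
        // body still removes whatever the test created on the remote system.
        class UserServiceIntegrationTest extends TestCase
        {
            private $client;
            private $createdUserIds = [];

            protected function setUp(): void
            {
                $this->client = new SoapProxyClient(/* endpoint config omitted */);
            }

            protected function tearDown(): void
            {
                foreach ($this->createdUserIds as $id) {
                    $this->client->DeleteUser($id);
                }
            }

            public function testCreateUserReturnsValidUserId()
            {
                $newUserId = $this->client->CreateUser('user1');
                $this->createdUserIds[] = $newUserId;

                $this->assertNotNull($newUserId);
                $this->assertNotNull($this->client->FindUser($newUserId));
            }
        }

    That helps with cleanup, but it still leans on the service itself to verify and reset state, which is the part I'm unsure about.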

    Read the article

  • What would a database look like if it were normalized to be completely abstracted? Let's call it Max(n) normal form

    - by Doug Chamberlain
    Edit: by "simplest form" I was not implying that it would be easy to understand. For instance, developing in low-level assembly language is the simplest way you can develop code, but it is far from the easiest. Essentially, what I am asking is this: in math you can simplify a fraction to a point where it can no longer be simplified. Can the same be true for a database, and what would a database look like in its simplest form?

    Read the article

  • Examples of limitations in IT due to different bit length by design

    - by Alaudo
    I am teaching the course "Introduction to Programming" to first-year students and would like to find interesting examples where the datatype size in bits, chosen by design, led to certain well-known restrictions or important values. Here are some examples:
    1. Because the Bell teleprinter used a 7-bit code (later accepted as ASCII), to this day we often have to encode attachments in electronic messages so that they contain only 7-bit data.
    2. The classic limitation of a 32-bit address space leads to the 4 GB maximum RAM size available to 32-bit systems and the 4 GB maximum file size in FAT32.
    Do you know some other interesting examples of how the choice of a data type (and especially its binary length) influenced the modern IT world? Added after some discussion in comments: I am not going to teach how to overcome limitations. I just want them to know that 1 byte can hold values from -128..127 or 0..255, that 2 bytes cover the range 0..65535, etc., by providing examples they know from other sources, like the above-mentioned base64 encoding. We are just learning the basic datatypes, and I am trying to find a good reference for "how large" these types are.

    Read the article

  • Developing a php system that tracks other websites analytics

    - by CodeCrack
    I want to develop a PHP website feature where users sign up, get a JavaScript snippet that displays an image on their site, and lets me track the number of visitors, unique hits, clicks, and average visit duration on their page. Is that something that should be done with open-source analytics software such as http://piwik.org/, or is it pretty doable on your own? If I had to do it myself from scratch, I would use an image/pixel to track the visit, drop a cookie with the JavaScript snippet to track uniques, and track clicks based on an image click and redirect; I'm not sure about the bounce rate. Any thoughts or opinions are welcome.
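    To make the pixel idea concrete, the endpoint I have in mind would be roughly this bare-bones sketch (table names, the cookie scheme, and the site parameter are all placeholders):

        <?php
        // track.php - the JavaScript snippet on the customer's site requests this
        // file as an image; every request is one hit. Names below are invented.
        $siteId    = isset($_GET['site']) ? (int) $_GET['site'] : 0;
        $visitorId = isset($_COOKIE['visitor_id']) ? $_COOKIE['visitor_id'] : null;

        if ($visitorId === null) {
            // First visit from this browser: issue an id so later hits count as
            // the same unique visitor.
            $visitorId = bin2hex(random_bytes(16));
            setcookie('visitor_id', $visitorId, time() + 60 * 60 * 24 * 365, '/');
        }

        // Record the hit (PDO connection and schema omitted / invented), e.g.:
        // $pdo->prepare('INSERT INTO hits (site_id, visitor_id, url, ts) VALUES (?, ?, ?, NOW())')
        //     ->execute([$siteId, $visitorId, isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '']);

        // Return a 1x1 transparent GIF so the <img> tag renders harmlessly.
        header('Content-Type: image/gif');
        echo base64_decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7');

    Clicks would go through a similar endpoint that logs and then redirects; visit duration and bounce rate are the parts I'd still need to work out, which is why I'm weighing this against just using Piwik.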

    Read the article
