Search Results

Search found 7583 results on 304 pages for 'roger guess'.


  • Why does my cursor jump when typing in Ubuntu 11.10?

    - by Stephen Myall
    When typing in Ubuntu my cursor jumps around, and it's not application specific: it doesn't matter whether I'm filling in a web form, writing an e-mail, or using LibreOffice or LyX. I'm using a 64-bit Sony Vaio machine. I read a previous question (link below) on this subject which indicates it may have something to do with the touchpad settings. As this has occurred in previous Ubuntu releases, I guess it is some kind of hardware issue. How do you turn off the touchpad while typing to avoid the cursor jumping around? I'd be grateful if anyone can make this stop. Stephen
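    A common workaround, assuming this is a Synaptics touchpad (the usual case on a Vaio), is to have syndaemon disable the touchpad while keys are being pressed; a minimal sketch:

        # Disable the touchpad while typing; re-enable 1 second after the last keystroke
        # -i 1: idle time in seconds, -d: run as a daemon, -t: only disable tapping and scrolling
        syndaemon -i 1 -d -t

    Adding that command to Startup Applications makes it take effect on every login.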

    Read the article

  • Any thoughts on Squarespace as a blogging platform?

    - by Ethan
    I'd like to start a blog and I'm leaning towards a hosted, paid platform. I don't want to maintain a Web server. I also don't want the hosting company to put their own ads or branding on my site. Squarespace looks interesting, though kind of pricey. About the same price as TypePad I guess. (I might consider TypePad, but I personally find their UI difficult to use.) WordPress is cheaper but I think they're more known for their software than their hosting. Has anyone tried Squarespace? Are there other options I should consider? Thanks.

    Read the article

  • How do freelancers know how much their work is worth?

    - by Qmal
    I want to start a bit of freelancing in web development using ASP.NET MVC3 and PHP, and I already have some people who are interested in hiring me, but I still can't figure out how much to charge for projects since I have never done it. For example, how much would this site cost? Would it cost more if the author built it from scratch instead of using WordPress as the CMS? Or what about a simpler site like this? How much time is considered good/normal for building sites like these? And maybe some freelancers with experience can tell me what the usual requests they get from clients are. What sites are the most in demand? I'm asking because I'm a student and I really can't work every day in a full-time job, but I need the money, so I guess a little freelancing would help me out.

    Read the article

  • .htaccess RedirectMatch 301 issue

    - by Steve
    Hi. I've moved my WordPress installation from one domain to another, and I want to use an .htaccess file on the original to redirect visitors to the new page on the new website. The old site is http://www.steve.doig.com.au/wordpress/. The new site is http://www.superlogical.net. I tried using the following .htaccess file in the /wordpress directory: RedirectMatch 301 http://www.steve.doig.com.au/wordpress(.*) http://www.superlogical.net/$1 However, all this does is redirect visitors to the URL http://www.superlogical.net/wordpress/. I guess this is working properly, but I don't have WordPress installed in a /wordpress folder on the new domain. How do I remove /wordpress from the redirected URL? Thanks.
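    A likely fix, sketched under the assumption that the rest of the setup is fine: RedirectMatch matches against the URL path (not the full URL), so capture what follows /wordpress/ and redirect to just that:

        # In the old site's /wordpress/.htaccess: strip the /wordpress prefix
        RedirectMatch 301 ^/wordpress/(.*)$ http://www.superlogical.net/$1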

    Read the article

  • Problem with Bash script: 'declare: not found'

    - by Ashfame
    I have a script that was running fine, but when I ran it today it said declare: not found. I am using the bash shell, and the path at the start of the script is correct. The two flagged lines in my script are as follows:

        declare -a RESPONSE
        RESPONSE=($RESULT)

    It also says ( is unexpected, but I guess that is a consequence of the first error. It's worth mentioning that typing declare directly into the shell works fine:

        declare | grep USER
        USER=ashfame
        USERNAME=ashfame
        values="$SVN_BASH_USERNAME";

    So, what's wrong here?
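    For what it's worth, "declare: not found" together with the complaint about ( is the classic sign that the script is being executed by sh (dash on Ubuntu) rather than bash, since dash has neither declare nor arrays; typing declare interactively works because your interactive shell is bash. A sketch, assuming the script is called script.sh:

        #!/bin/bash
        # The shebang above must be the very first line, forcing bash rather than sh/dash
        declare -a RESPONSE
        RESPONSE=($RESULT)

    Then invoke it as ./script.sh or bash script.sh, never as sh script.sh.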

    Read the article

  • Is it true that the first versions of C compilers ran for dozens of minutes and required swapping floppy disks between stages?

    - by sharptooth
    Inspired by this question. I heard that some very, very early versions of C compilers for personal computers (around 1980, I guess) resided on two or three floppy disks, so in order to compile a program one had to first insert the disk with the "first pass" and run it, then change to the disk with the "second pass" and run that, then do the same for the "third pass". Each pass ran for dozens of minutes, so the developer lost a lot of time over even a typo. How realistic is that claim? What were the actual figures and details?

    Read the article

  • gcc no longer works after upgrade to latest Ubuntu

    - by Hugh S. Myers
    As an example:

        hsmyers@ubuntu:~/c_dev$ cat hello.c
        #include <stdio.h>
        int main(int argc, char **argv) {
            printf("Hello World!\n");
            return 0;
        }
        hsmyers@ubuntu:~/c_dev$ gcc -c -o hello.o hello.c
        In file included from /usr/include/stdio.h:28:0,
                         from hello.c:1:
        /usr/include/features.h:323:26: fatal error: bits/predefs.h: No such file or directory
        compilation terminated.

    At a guess, somewhere along the way, after trying to fix the error message:

        /usr/bin/ld: cannot find crt1.o: No such file or directory

    I've munged things up completely. Could anyone please advise? --hsm
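    For what it's worth, bits/predefs.h ships in libc6-dev, and the earlier missing crt1.o also points at broken C library development files rather than at gcc itself. A hedged first step, assuming standard Ubuntu package names, would be to reinstall those packages:

        # Reinstall the C library headers and startup files (provides bits/predefs.h and crt1.o)
        sudo apt-get install --reinstall libc6-dev
        # Then reinstall the compiler itself in case its files were also touched
        sudo apt-get install --reinstall gcc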

    Read the article

  • How is time calculation performed by a computer?

    - by Jorge Mendoza
    I need to add a certain feature to a module in a given project regarding time calculation. For this specific case I'm using Java, and reading through the documentation of the Date class I found out that time is calculated in milliseconds elapsed since January 1, 1970, 00:00:00 GMT (the Unix epoch). I think it's safe to assume there is a similar "starting date" in other languages, so I guess the specific implementation in Java doesn't matter. How is the time calculation performed by the computer? How does it know exactly how many milliseconds have passed from that given "starting date and time" to the current date and time?
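    For illustration, a minimal Java sketch of what the question describes: the operating system keeps a clock (set at boot from a hardware real-time clock and usually corrected via NTP), and the runtime simply exposes the elapsed-milliseconds counter, so "now" is a single number that can be turned back into a calendar date:

        import java.util.Date;

        public class EpochDemo {
            public static void main(String[] args) {
                long millisSinceEpoch = System.currentTimeMillis(); // ms elapsed since 1970-01-01T00:00:00 GMT
                Date now = new Date(millisSinceEpoch);              // the same instant as a calendar date
                System.out.println(millisSinceEpoch + " ms -> " + now);
            }
        }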

    Read the article

  • Ubuntu Software Center does not proceed past "applying changes"

    - by aneal
    I have a problem with Ubuntu Software Center. It has been "Searching" and "applying changes" for a long period of time. I tried to cancel by clicking the cross (X) mark; however, it is now stuck at "cancelling". It won't let me install any new application, even from the terminal, I guess:

        neal@neal-G50VT:~$ sudo apt-get install gnome-tweak-tool
        E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
        E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
        neal@neal-G50VT:~$ sudo dpkg --configure -a
        dpkg: error: dpkg status database is locked by another process

    There are similar questions here, but with no answers: Software Center stuck for Dropbox; Software Center freezes during "applying changes".
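    The lock errors mean another package-manager process (here, the stuck Software Center) is still holding /var/lib/dpkg/lock. A hedged sequence, assuming nothing important is mid-install:

        # See what is holding the dpkg lock
        ps aux | egrep 'software-center|apt|dpkg'
        # Kill the stuck process (<pid> is a placeholder: the PID listed above)
        sudo kill <pid>
        # Let dpkg finish any half-configured packages
        sudo dpkg --configure -a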

    Read the article

  • Should my colleagues review each other's code from the source control system?

    - by Daniel Excinsky
    Hi everybody. So here's my story: one of my colleagues reviews all the code committed to the revision control system. I'm not talking about reasonable review of changes in the parts he owns; he goes through the code file by file, line by line, every new file and every modified one. I feel just like I'm being spied on! My guess is that if code has already been committed to the control system, you should at least trust it as workable. My question is: maybe I'm just too paranoid, and the practice of reviewing each other's code is good? P.S.: We're a team of only three developers, and I fear that if there are more of us, my colleague just won't have time to review all the code we write.

    Read the article

  • 11.10 AMD64 alternate installer has broken packages?

    - by Ibrahim
    I'm installing Ubuntu 11.10 from the alternate install ISO because I need to use LVM. Unfortunately, at some point the installer fails because it can't install libpurple0 and ubuntu-desktop: they depend on libsasl2-modules, but that is somehow not installable. It also reports the same error for xserver-xorg-video-all, but I think I could probably live without that one. It's kind of annoying that this is broken; I'm guessing it might work if I had internet access, but right now I'm on a campus network with a captive portal, so I can't actually get a network connection without using a browser to log in. Just thought someone should know, or maybe I'm doing something wrong. I'm going to try installing 11.04 alternate and then upgrading, I guess.

    Read the article

  • Design patterns and multiple programming languages

    - by Eduard Florinescu
    I am referring here to the design patterns found in the GoF book. First, as I see it, there are a few peculiarities to design patterns when you know multiple languages. For example, in Java you really need a singleton, but in Python you can do without it: you write a module. I saw somewhere a wiki trying to write out all the GoF patterns for JavaScript, and all the entries were empty; I guess it might be a daunting task to do that adaptation. If someone uses design patterns while programming in multiple languages supporting the OOP paradigm, can you give me a hint on how I should approach design patterns, in a way that helps me in all the languages I use (Java, JavaScript, Python, Ruby)? Can I write good applications without knowing the GoF design patterns exactly, or do I need just some crucial ones (and if so, which)? Are there alternatives to GoF for specific languages? And should a programmer or a team build their own set of design patterns?
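    To illustrate the Python remark above with a minimal sketch (the module and function names are made up for the example): a module is imported once and cached in sys.modules, so module-level state already gives you singleton behavior:

        # config.py - the module itself plays the singleton role
        settings = {"debug": False}

        def enable_debug():
            settings["debug"] = True

        # elsewhere, every importer sees the same cached module object:
        #   import config
        #   config.enable_debug()
        #   assert config.settings["debug"] is True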

    Read the article

  • How to allow remote connections to Flask?

    - by Ilya Smagin
    Inside the system, running in a virtual machine, I can access the running server at 127.0.0.1:5000. Although the 'remote' address of the VM is 192.168.56.101 (ping and ssh work fine), I cannot access the server at 192.168.56.101:5000, either from the virtual machine or from the local one. I guess there's something preventing remote connections. Here's /etc/network/interfaces:

        auto eth1
        iface eth1 inet static
            address 192.168.56.101
            netmask 255.255.255.0

    ufw is inactive. How do I fix this problem?
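    Assuming the server is Flask's built-in development server, the likely culprit is not the network at all: app.run() binds to 127.0.0.1 by default, so only local connections are accepted. A minimal sketch of the fix:

        from flask import Flask

        app = Flask(__name__)

        @app.route("/")
        def index():
            return "Hello from the VM"

        if __name__ == "__main__":
            # Bind to all interfaces so 192.168.56.101:5000 is reachable from other hosts
            app.run(host="0.0.0.0", port=5000)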

    Read the article

  • What is the version numbering logic for open source developers managing software releases?

    - by Stephen Myall
    I guess this is more of a general question that I can't find the answer to anywhere. What is the version numbering logic for open source developers managing software releases, and is there any governance or guidance I can read up on? This question originates from my reviewing and researching software on countless websites that I would like to use on my Ubuntu OS. Through experience, I am learning that some sites are much better than others at explaining whether a release is stable, experimental, or a maintenance release, but these explanations are not consistent with any version numbering logic I am familiar with.
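    There is no single standard, but one widely adopted convention worth reading up on is Semantic Versioning (semver.org); roughly sketched:

        MAJOR.MINOR.PATCH          e.g. 2.4.1
          MAJOR - incremented for incompatible API changes
          MINOR - backwards-compatible new functionality
          PATCH - backwards-compatible bug fixes
        Pre-releases carry a suffix, e.g. 2.5.0-beta.1; some projects
        (famously the old Linux kernel scheme) instead used even MINOR
        numbers for stable releases and odd ones for development.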

    Read the article

  • Easy user management on an HTML site?

    - by James Buldon
    I hope I'm not asking a question for which the answer is obvious. If I am, apologies. Within my HTML site (i.e., not WordPress, Joomla, etc.) I want to have a level of user management. That means I want some pages to be accessible only to certain people with the correct username and password. What's the best way to do this? Are there any available scripts out there? I guess I'm looking for a free/open source version of something like this: http://www.webassist.com/php-scripts-and-solutions/user-registration/
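    If the site is served by Apache, the classic zero-code option is HTTP basic authentication; a minimal sketch (the paths are placeholders you'd adjust):

        # .htaccess in the directory you want to protect
        AuthType Basic
        AuthName "Members only"
        # Password file created beforehand with: htpasswd -c /path/to/.htpasswd username
        AuthUserFile /path/to/.htpasswd
        Require valid-user

    Note this only protects pages; for registration workflows like the linked product, you would still need a server-side script.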

    Read the article

  • Ray Tracing Shadows in deferred rendering

    - by Grieverheart
    Recently I programmed a raytracer for fun and found it beautifully simple how shadows are created, compared to a rasterizer. Now, I couldn't help but wonder whether it would be possible to implement something similar for ray-traced shadows in a deferred renderer. The way I thought this could work is: after drawing to the g-buffer, in a separate pass and for each pixel, calculate rays to the lights and draw them as lines of unique color together with the geometry (drawn with color 0). The lines will be cut off if there is occlusion, and this fact could be used in a fragment shader to calculate which rays are occluded. I guess there must be something I'm missing; for example, I'm not sure how the fragment shader could save the occlusion results for each ray so that they are available to the pixel at the ray's origin. Has this method been tried before, is it possible to implement it as I described, and if yes, what would be the performance drawbacks of calculating shadows this way?

    Read the article

  • Will Unity stop being a plugin for Compiz?

    - by Murphy1138
    I ask this because with the Unity desktop running, when I try any games on my Ubuntu 12.04.1, I get a huge frame rate drop with Unity and Compiz. If I switch to GNOME Classic, which uses Mutter, I get a vast boost in performance. My system is an 8-core AMD with an Nvidia 460 SE that can play anything I throw at it in Windows, and I'm using the latest Nvidia drivers, but even simple games like the Humble Bundle ones get serious lag with Unity, and as far as I can guess the only cause of this is Compiz. When Steam comes to Ubuntu, how will this performance loss be addressed?

    Read the article

  • Oracle Exalogic Elastic Cloud - Planned Webcasts

    - by chuck.speaks
    I'm putting together a collection of recorded webcasts around Oracle Exalogic Elastic Cloud (Exalogic). The plan is to do a systems overview and then multiple deep dives into the hardware and software components that make up the engineered system. Those of you who are members of our partner community (Oracle Partner Network), drop me a note if you are interested in a full-blown in-class delivery via PTS resources. There is no schedule for these workshops, but if there is enough interest I would venture to guess they would roll out soon. Those of you with applications certified on Oracle WebLogic Server who would like to scale to Exalogic, see me or watch this space. Chuck Speaks chuck <dot> speaks at oracle <dot> com

    Read the article

  • Qt Certification Exams

    - by karlphillip
    I'm wondering about taking a Qt certification exam this year, but I'm not 100% sure the investment is worth it. I'm considering it because I think it could be a nice plus on my resume, and as you know, I'm all for improving my software-engineer persona. As I already hold BSc and MSc degrees in computer stuff, I guess I see the certification process as some kind of adventure. Anyway, I know I'll spend a lot of time preparing myself for the exam, and I just want to know if a Qt certification is worth the effort. Apparently there are two certificates you can get in the Qt world: Nokia Certified Qt Developer (basic) and Nokia Certified Qt Specialist (advanced). Nowadays I build cross-platform software in C++, and this exam would fit beautifully on my resume. My main concern is that, given the uncertain future of Qt, I might be throwing time and money out the window. I'm looking for advice regarding the usefulness of such certifications.

    Read the article

  • Configuring mouse buttons to switch between apps?

    - by Matt Gregory
    I just installed 14.04, so I'm using the default setup (Unity, I guess). I have these two extra mouse buttons on the side of my mouse. Is there any way to map these so they can switch between open apps? What would be perfect is if clicking on button 6 (or whatever it is) would cycle forward through apps, button 7 would go backwards, and holding one of the buttons would show the task list and let you click on the app you want. That's really what I want.
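    One common approach, sketched under some assumptions: it requires the xbindkeys and xdotool packages, and it assumes the side buttons report as buttons 8 and 9 (check with xev; run xbindkeys after editing the file). It covers cycling but not the hold-to-show-task-list behavior:

        # ~/.xbindkeysrc
        # Side button: cycle forward through open windows
        "xdotool key alt+Tab"
          b:8
        # Other side button: cycle backwards
        "xdotool key alt+shift+Tab"
          b:9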

    Read the article

  • Is there a variable-width font that does not change width when effects like bold or italic are applied?

    - by George Bailey
    NetBeans has a word wrap feature now, but if the font changes width when bold, then it gets all jumpy and sometimes hard to work with. Edit: It turns out that even with Courier New, NetBeans' word wrap still jumps up and down whole lines at a time at random. I guess this question no longer needs an answer. However, it seems that there is no answer (at least nobody has brought one up yet). I am currently using Comic Sans MS, which gets wider when bold.

    Read the article

  • How to use MythBuntu to send TV signal to a 2nd frontend

    - by Mark Preston
    I guess a MythTV or MythBuntu backend acts as a "server" for the frontends. I have MythBuntu installed. It runs fine: I can tune live TV, hear the sound, etc. To get this to work, I had to configure the wired network IPv4 settings to Method: Link-Local Only. The local backend IP address is 127.0.0.1, and the info (bottom of screen) says that if there is another frontend, this IP address must be changed.

    1 - Does this mean changed to the IP address of the 2nd frontend?
    2 - What "Method" do I use to support two or more frontends?
    3 - I have an Ethernet switch which currently "sees" the TV signal and sends it to the computer's Ethernet port, where MythBuntu makes use of it.
    4 - How do I set up Myth to send its output (the TV shows) to both televisions?

    If you know of a how-to or website, please give the URL or identifying keywords.

    Read the article

  • Why doesn't Compiz show the outline when I use the Grid plugin?

    - by Roland Taylor
    When I drag windows, instead of getting an outline like I would on a clean install, I get nothing, so I don't know what the plugin will do before I release the mouse, other than guessing. Is there something known to cause this, and what can I do to get the outline back? (NB: I have the outline enabled in the plugin settings, so please do not ask me to enable it :D!) EDIT: Now I have reinstalled the Compiz plugins cleanly, and still nothing :(. What can I do?

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article; there's more to say than fits into a comment.

    Mocks and TDD

    I don't see how TDD avoids or opposes mocks. TDD and mocks are orthogonal: TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then, if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain: I don't use mocks as proxies for "expensive" resources. Rather, they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase; it should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case means to me describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirement document", is not enough to start software development. Coding should only begin once the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order; it might be necessary to show several interface mock-ups to the customer, even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations: TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking Roman numerals from a Wikipedia article.
    They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that; the domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution for converting into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem; if that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited, so the logical solution does not need to be limited or preliminary or tentative either. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The Arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}, and the number is the sum of the set members. The Roman "encoding" is different. There is no base (like 10 for Arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1], and the number is still the sum of the members of this list. The Roman "encoding" thus is simpler than the Arabic: each "digit" can be taken at face value, with no multiplication by a base required. But what about IV, which looks like a contradiction to the above rule? It is not, if you accept Roman "digits" as not being limited to single characters. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V, which is more difficult, because here some "digits" get subtracted. Here's the list of Roman "digits" with their values:

    {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an Arabic number becomes trivial. I just need to find the values of the Roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a three-step process: find all the factors, translate the factors found, and compile the Roman representation. Translation is just a look-up.
    Finding, though, needs some calculation: find the highest remaining factor fitting into the value, remember it and subtract it from the value, then repeat with the remaining value and the remaining factors. Please note: this is just an algorithm, not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction.

    With this solution in hand I can finally do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are three aspects to test: the translation, the compilation, and finding the factors. Testing the translation primarily means checking that the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: first check whether an Arabic number equal to a factor is processed correctly (e.g. 1000=M); then check whether an Arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly; then check whether a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]); finally check whether an Arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now, one by one. I start with a simple "detail function": Translate(), and I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. The implementation to satisfy the test is as simple as possible, right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away; splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye; others need to work their way towards it. Having two tests is what I find more important. Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess.
    And normally I would not even have bothered to write that one, because the implementation is so simple; I don't need to test .NET framework functionality. But again: if it serves the educational purpose… Finally, the most complicated part of the solution: finding the factors. There are several equivalence partitions, but still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first: I focus on just the first test case, with no looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with the details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one. The implementation seems easy, and both test cases are green. (Of course this only works on the premise that there's always a matching factor, which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on. Now for more than a single factor: interestingly, not just one test becomes green now, but all of them. Great! You might say that then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple, even simpler than the recursion I had briefly thought of during the problem-solving phase. And by the way: the acceptance tests went green too. Mission accomplished, at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in a top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet, but this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required, but maybe in a different way than you would expect. That's why I'd rather call it "cleanup". First I remove duplication: there are two places where factors are defined, in Translate() and in Find_factors(), so I factor the map out into a class constant, which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me; they only have temporary value; they are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all Roman "digits". This then gives my final test class and the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern, as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code, especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some of the bottom of the solution: the leaves of the functional decomposition tree.
    Where things became fuzzy, because the design did not cover any more details (as with Find_factors()), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: encapsulation of the implementation details was not compromised, since private methods could naturally stay private and I did not need to make them internal or public just to be able to test them; and I was able to write focused tests for small aspects of the solution, with no need to test everything through the solution root, the API.

    The bottom line for me thus is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development (being a researcher, being an engineer, being a craftsman) are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice, but it should not be taken dogmatically, and above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable, and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that only has an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
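    Since the screenshots carrying the article's actual test and production code did not survive the conversion to text, here is a minimal C# sketch of the solution as described in the prose above. The class and method names (ArabicRomanConversions, ToRoman, Find_factors, Translate) follow the article; the exact bodies are a reconstruction, not the author's original code:

        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // The map of Roman "digits" and their values, as listed in the article
            private static readonly (int Value, string Digit)[] Map =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
            };

            public string ToRoman(int arabic)
            {
                return Compile(Translate(Find_factors(arabic)));
            }

            private static IEnumerable<int> Find_factors(int value)
            {
                // Find the highest remaining factor fitting into the value,
                // subtract it, and repeat with the remainder
                foreach (var (factor, _) in Map)
                {
                    while (value >= factor)
                    {
                        yield return factor;
                        value -= factor;
                    }
                }
            }

            private static IEnumerable<string> Translate(IEnumerable<int> factors)
            {
                // Translation is just a look-up in the digit map
                return factors.Select(f => Map.First(m => m.Value == f).Digit);
            }

            private static string Compile(IEnumerable<string> digits)
            {
                // Compilation is trivial: concatenate the digits
                return string.Concat(digits);
            }
        }

    With this sketch, new ArabicRomanConversions().ToRoman(1954) yields "MCMLIV", matching the article's M+CM+L+IV example.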

    Read the article

  • How to read a BC4 texture in GLSL?

    - by Question
    I'm supposed to receive a texture in BC4 format. In OpenGL, I guess this format is called GL_COMPRESSED_RED_RGTC1. The texture is not really a "texture", more like data to handle in the fragment shader. Usually, to get colors from a texture within a fragment shader, I do:

        uniform sampler2D TextureUnit;
        void main()
        {
            vec4 TexColor = texture2D(TextureUnit, vec2(gl_TexCoord[0]));
            (...)

    the result of which is obviously a vec4, for RGBA. But now I'm supposed to receive a single float from the read, and I'm struggling to understand how this is achieved. Should I still use a texture sampler and expect the value to be in a specific position (for example, within TexColor.r), or should I use something else?
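    For what it's worth, a sketch of the likely answer: BC4/GL_COMPRESSED_RED_RGTC1 is a single-channel format, so a regular sampler2D works and the decompressed value comes back in the red component (single-channel textures sample as (R, 0, 0, 1)):

        uniform sampler2D TextureUnit;

        void main()
        {
            // RGTC1/BC4 is one-channel: the value lands in .r,
            // while .g/.b read as 0.0 and .a as 1.0
            float value = texture2D(TextureUnit, vec2(gl_TexCoord[0])).r;
            gl_FragColor = vec4(vec3(value), 1.0);
        }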

    Read the article
