Search Results

Search found 9 results on 1 page for 'vytautas butkus'.

Page 1/1

  • How to get initial API right using TDD?

    - by Vytautas Mackonis
    This might be a rather silly question, as I am making my first attempts at TDD. I love the sense of confidence it brings and the generally better structure of my code, but when I started to apply it to something bigger than one-class toy examples, I ran into difficulties. Suppose you are writing a library of sorts. You know what it has to do, and you know in general how it is supposed to be implemented (architecture-wise), but you keep "discovering" that you need to make changes to your public API as you code. Perhaps you need to turn a private method into a strategy pattern (and now need to pass a mocked strategy in your tests), or perhaps you misplaced a responsibility here and there and have to split an existing class. When you are improving upon existing code, TDD seems a really good fit, but when you are writing everything from scratch, the API you write tests for is a bit "blurry" unless you do a big design up front. What do you do when you already have 30 tests on a method whose signature (and, for that matter, behavior) has changed? That is a lot of tests to change once they add up.

    Read the article
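
    A minimal PHP sketch of the kind of refactoring this question describes (all names here are hypothetical and PHP 7.4+ is assumed): the formerly private step is extracted behind a strategy interface while the public method keeps its signature, so the existing tests keep passing and only tests that care about the extracted step need to inject a fake.

      <?php
      // Hypothetical illustration only: the public API stays fixed while an
      // internal step is extracted into a strategy that tests can replace.

      interface ParserStrategy
      {
          public function parse(string $raw): array;
      }

      class DefaultParser implements ParserStrategy
      {
          public function parse(string $raw): array
          {
              return explode(',', $raw);   // the logic that used to be a private method
          }
      }

      class Importer
      {
          private ParserStrategy $parser;

          // The strategy argument is optional, so tests written against the old
          // constructor keep working; only new tests inject a mock parser.
          public function __construct(?ParserStrategy $parser = null)
          {
              $this->parser = $parser ?? new DefaultParser();
          }

          public function import(string $raw): array
          {
              return $this->parser->parse($raw);   // public signature unchanged
          }
      }

      var_dump((new Importer())->import('a,b,c'));   // ["a", "b", "c"]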

  • How can a collection class instantiate many objects with one database call?

    - by Buttle Butkus
    I have a baseClass where I do not want public setters. I have a load($id) method that will retrieve the data for that object from the db. I have been using static class methods like getBy($property,$values) to return multiple class objects using a single database call. But some people say that static methods are not OOP. So now I'm trying to create a baseClassCollection that can do the same thing. But it can't, because it cannot access protected setters. I don't want everyone to be able to set the object's data. But it seems that it is an all-or-nothing proposition. I cannot give just the collection class access to the setters. I've seen a solution using debug_backtrace() but that seems inelegant. I'm moving toward just making the setters public. Are there any other solutions? Or should I even be looking for other solutions?

    Read the article
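
    One commonly suggested shape for this, sketched below in PHP 8 (class and method names are placeholders, not the asker's actual code): hydration happens inside a static named constructor on the entity class itself, so the setter can stay protected and the collection fills itself from a single IN (...) query without ever touching the setters.

      <?php
      // Sketch under assumptions: PDO is available, and $table / $column come
      // from trusted code, not user input (they are interpolated into the SQL).

      class BaseEntity
      {
          protected array $data = [];

          protected function setData(array $row): void   // stays protected
          {
              $this->data = $row;
          }

          public static function fromRow(array $row): static
          {
              $obj = new static();
              $obj->setData($row);   // allowed: we are inside the class hierarchy
              return $obj;
          }
      }

      class EntityCollection
      {
          /** @var BaseEntity[] */
          private array $items = [];

          public function loadBy(PDO $pdo, string $table, string $column, array $values): void
          {
              $placeholders = implode(',', array_fill(0, count($values), '?'));
              $stmt = $pdo->prepare("SELECT * FROM {$table} WHERE {$column} IN ({$placeholders})");
              $stmt->execute($values);
              foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
                  $this->items[] = BaseEntity::fromRow($row);   // one query, many objects
              }
          }
      }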

  • Do objects maintain identity under all non-cloning conditions in PHP?

    - by Buttle Butkus
    PHP 5.5. I'm doing a bunch of passing around of objects with the assumption that they will all maintain their identities: that any changes made to their states from inside other objects' methods will continue to hold true afterwards. Am I assuming correctly? I will give my basic structure here.

      class builder {
          protected $foo_ids = array();   // set in construct
          protected $foo_collection;
          protected $bar_ids = array();   // set in construct
          protected $bar_collection;

          protected function initFoos() {
              $this->foo_collection = new FooCollection();
              foreach ($this->foo_ids as $id) {
                  $this->foo_collection->addFoo(new foo($id));
              }
          }

          protected function initBars() {
              // same idea as initFoos
          }

          protected function wireFoosAndBars(fooCollection $foos, barCollection $bars) {
              // arguments are passed in using $this->foo_collection and $this->bar_collection
              foreach ($foos as $foo_obj) {   // (foo_collection implements IteratorAggregate)
                  $bar_ids = $foo_obj->getAssociatedBarIds();
                  if (!empty($bar_ids)) {
                      $bar_collection = new barCollection();   // sub-collection to be a component of each foo
                      foreach ($bar_ids as $bar_id) {
                          $bar_collection->addBar(new bar($bar_id));
                      }
                      $foo_obj->addBarCollection($bar_collection);
                      // now each foo_obj has a collection of bar objects, each of which is
                      // also in the main collection. Are they the same objects?
                  }
              }
          }
      }

    What has me worried is that foreach supposedly works on a copy of its arrays. I want all the $foo and $bar objects to maintain their identities no matter which collection object they become a part of. Does that make sense?

    Read the article
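
    A quick self-contained check of exactly this worry (a sketch written against the PHP 5 object model the question is using): foreach does work on a copy of the array, but what gets copied are object handles, so every handle still refers to the same object and state set inside the loop persists afterwards.

      <?php
      class Foo
      {
          public $bars = [];
      }

      $collection = [new Foo(), new Foo()];
      $seenInsideLoop = [];

      foreach ($collection as $foo) {       // $foo is a copied handle, not a clone
          $foo->bars[] = 'set inside foreach';
          $seenInsideLoop[] = $foo;
      }

      var_dump($collection[0]->bars);                    // ["set inside foreach"]: the change persisted
      var_dump($seenInsideLoop[0] === $collection[0]);   // bool(true): same identity

      $clone = clone $collection[0];                     // identity is only lost on an explicit clone
      var_dump($clone === $collection[0]);               // bool(false)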

  • Custom resolution - Windows 7 - no drivers

    - by Vytautas Butkus
    I'm having some problems with my VGA card. When I enable my VGA drivers and boot Windows, they crash and I get a message that the problem occurred with the atimdag.sys driver or something like that. I'm getting BSODs all the time unless I go to the Control Panel and disable the drivers manually. I can live with disabled drivers just fine (I've been used to it for quite some time), but the annoying problem is my display resolution. I cannot set my resolution to anything higher than 1024x768. I was wondering if it's possible to change the resolution to my display's native resolution without drivers. I've been trying to find something that would work without drivers, but with no luck (I've found many solutions for how to do it with working drivers, but as you know, I cannot enable mine). So please, could someone tell me how to set a custom resolution?

    Read the article

  • PHP memory_limit local value does not match php.ini value

    - by Buttle Butkus
    CentOS system. Summary: I changed memory_limit in the master and local php.ini, and yet there is no change in the local value for a particular virtual host. Trying to improve performance, I set the memory_limit to 1024M in /etc/php.ini. phpinfo() shows the Master and Local values for the other virtual hosts on the server as 1024M. Changing the value in /etc/php.ini changes all values except one: one site is stuck with a local value of 256M. I thought I had found the problem: there is a php.ini file (which I didn't know about) in that site's root, and it had memory_limit = 256M. I changed it to 1024M. Problem solved? No. And now I don't know where to look. Obviously, I've restarted Apache (/etc/init.d/httpd restart), and that usually does the trick. I also turned off the APC cache, though I don't think it would cache ini files. And finally, I tried adding this to the virtual host in httpd.conf: php_value memory_limit 536870912 (that is 512 MB, since the value is in bytes), and that had no effect whatsoever. What else could be the problem? Thanks.

    Read the article
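
    A small diagnostic sketch that could be dropped into the stuck virtual host (none of these calls are from the question; they are just standard PHP introspection functions): it reports which ini files that particular request actually parsed and what the effective limit is, which helps narrow down whether the 256M comes from another scanned ini file or from a per-directory/per-vhost override such as an .htaccess php_value, a .user.ini, or a php_admin_value.

      <?php
      echo 'Loaded php.ini:    ' . (php_ini_loaded_file() ?: 'none') . PHP_EOL;
      echo 'Extra .ini files:  ' . (php_ini_scanned_files() ?: 'none') . PHP_EOL;
      echo 'memory_limit now:  ' . ini_get('memory_limit') . PHP_EOL;

      // get_cfg_var() reflects the ini files only, so if it differs from
      // ini_get() the value is being overridden at runtime or per directory.
      echo 'From config files: ' . get_cfg_var('memory_limit') . PHP_EOL;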

  • php.ini date.timezone usefulness?

    - by Buttle Butkus
    I'm not sure if this is a question for Server Fault or Stack Overflow, but it seems like it has a lot to do with server config. We have a server in Chicago and the server's clock is on Chicago time. But since the business is located in California, it would seem to make sense to use Pacific time. What happens when the server time is Chicago and the php.ini directive date.timezone is set to "America/Los_Angeles"? How will that affect logs written to MySQL, error logs, etc.? I've looked at the Apache error log and, as I expected, the PHP directive does not affect it. Times are all server time. Thanks.

    Read the article
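
    A short illustration of the directive's actual scope (a sketch of well-known behaviour, not taken from the question): date.timezone only changes how PHP's own date/time functions render timestamps; the system clock and Apache's own logs stay on server time, and MySQL's NOW() follows the MySQL server or session time_zone instead.

      <?php
      ini_set('date.timezone', 'America/Los_Angeles');    // same effect as the php.ini line

      $now = time();                                      // a timezone-free Unix timestamp

      echo date('Y-m-d H:i:s T', $now) . PHP_EOL;         // rendered as Pacific wall-clock time

      $chicago = new DateTime('@' . $now);                // same instant...
      $chicago->setTimezone(new DateTimeZone('America/Chicago'));
      echo $chicago->format('Y-m-d H:i:s T') . PHP_EOL;   // ...rendered as Chicago wall-clock time

    So a timestamp generated with date() and written into MySQL will be Pacific time, while a column filled by MySQL's NOW() follows MySQL's own time zone setting; the two are independent.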

  • How to make Ubuntu useradd behave like Centos useradd?

    - by Buttle Butkus
    I don't remember modifying CentOS useradd to get this behavior. useradd in CentOS creates the user's home directory with all the normal files (like .bashrc). I modified /etc/default/useradd to make it look like the CentOS one (it just required some uncommenting), except that Ubuntu has SHELL=/bin/sh instead of SHELL=/bin/bash. How do I make useradd act like it does in CentOS? Is there some existing option to change? Or should I just add an alias to /etc/bash.bashrc? The difference: on Ubuntu, useradd is not creating the home directory. As root:

      $ useradd test
      $ cd ~test
      -su: cd: /home/test: No such file or directory

    Read the article

  • What's faster, cp -R or unpacking tar.gz files?

    - by Buttle Butkus
    I have some tar.gz files that total many gigabytes on a CentOS system. Most of the tar.gz files are actually pretty small, but the ones with images are large. One is 7.7G, another is about 4G, and a couple are around 1G. I have unpacked the files once already and now I want a second copy of all of them. I assumed that copying the unpacked files would be faster than re-unpacking them, but I started running cp -R about 10 minutes ago and so far less than 500M has been copied. I feel certain that the unpacking process was faster. Am I right? And if so, why? It doesn't seem to make sense that unpacking would be faster than simply duplicating the existing structures.

    Read the article

  • How to automatically copy a file uploaded by a user by FTP in Linux (CentOS)?

    - by Buttle Butkus
    An outside contractor says they need read/write/execute permissions on part of the filesystem so they can run a script. I'm OK with that, but I want to know what they're running, in case it turns out to be some nefarious code. I assume they are going to upload the file, run it, and then delete it to prevent me from finding out what they've done. How can I find out exactly what they've done? My question specifically asks for a way of automatically copying the file, which would be one approach, but if you have another solution, that's fine. For example, if the file could be automatically copied to /home/root/uploaded_files/, that would be awesome.

    Read the article
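
    A minimal polling sketch of the "automatically copy the upload" idea, kept in PHP like the rest of these questions (the watched directory is a placeholder, /home/root/uploaded_files/ is the destination named in the question, and this is not a hardened solution): every couple of seconds it copies any new or re-uploaded file into a timestamped archive before the contractor can delete it.

      <?php
      $watchDir   = '/home/contractor/upload';     // hypothetical FTP upload directory
      $archiveDir = '/home/root/uploaded_files';

      $seen = [];
      while (true) {
          foreach (glob($watchDir . '/*') ?: [] as $path) {
              if (!is_file($path)) {
                  continue;
              }
              $key = $path . ':' . filemtime($path);   // re-archive if the same name is re-uploaded
              if (isset($seen[$key])) {
                  continue;                            // this version is already archived
              }
              $seen[$key] = true;
              $target = $archiveDir . '/' . date('Ymd-His') . '-' . basename($path);
              copy($path, $target);                    // keep a timestamped copy
          }
          sleep(2);
      }

    An inotify-based watcher (for example the PECL inotify extension, or a system-level tool such as incron or auditd) would avoid the polling, but the idea is the same.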
