Search Results

Search found 44 results on 2 pages for 'paddy'.

Page 1/2 | 1 2  | Next Page >

  • Why is directly accessing a property not recommended in OOP PHP?

    - by Parth
    If I have a class "person" with a property $name and its getter (get_name()) and setter (set_name()) methods, then after instantiating the object and setting the property, i.e. $paddy = new person(); $paddy->set_name("Padyster Dave"); echo "Paddy's full name: ".$paddy->name; - why is accessing $paddy->name directly, as in the above code, not recommended?
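
    A minimal sketch of the usual encapsulation argument, assuming $name is made private (the validation rule below is made up for illustration): routing writes through set_name() lets the class enforce invariants and change its internal representation later without breaking callers, which direct property access bypasses.

        <?php
        class person {
            private $name;   // private, so outside code cannot bypass the setter

            public function set_name($name) {
                if (trim($name) === '') {   // validation is only possible because writes go through here
                    throw new InvalidArgumentException('Name cannot be empty');
                }
                $this->name = $name;
            }

            public function get_name() {
                return $this->name;
            }
        }

        $paddy = new person();
        $paddy->set_name("Padyster Dave");
        echo "Paddy's full name: " . $paddy->get_name();   // recommended
        // echo $paddy->name;   // fatal error once $name is private: callers cannot skip validation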

    Read the article

  • In SEO & SEM terms, use of an international domain vs a local domain

    - by Paddy
    In terms of SEO & SEM, if I have a .com and a .co.uk, would it be better to use the .com and park the .co.uk, given that I am selling the product locally (in the UK) and later moving out into the international market? Will I struggle more to compete locally with regard to local searches and Google AdWords if I make the .com the primary domain? Does parking the .co.uk or the .com affect a domain's search relevance locally and internationally?

    Read the article

  • How to add information indicators to a Launcher icon from a script?

    - by Paddy Landau
    Some applications place informational text over their icons in the Launcher. For example, Thunderbird shows the unread message count, and Update Manager shows the number of updates available and a progress bar. The image shows these two examples: I have created some Bash scripts that use yad (a Zenity fork), which adds an icon to the Launcher while running. I would like to know how I can create my own information overlay within my script for those icons.
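
    For reference, the Unity launcher exposes count and progress values that applications publish over D-Bus via libunity; below is a minimal Python sketch that a Bash script could call (assuming the python-gi Unity bindings are installed; "myscript.desktop" is a hypothetical .desktop id the script would need to own):

        #!/usr/bin/env python
        # Sketch: publish a count badge and progress bar for a launcher icon via libunity.
        import sys
        from gi.repository import Unity, GLib

        entry = Unity.LauncherEntry.get_for_desktop_id("myscript.desktop")  # hypothetical id
        entry.set_property("count", int(sys.argv[1]))        # e.g. items processed
        entry.set_property("count_visible", True)
        entry.set_property("progress", float(sys.argv[2]))   # 0.0 to 1.0
        entry.set_property("progress_visible", True)

        # Run a short main loop so the D-Bus update is actually emitted before exiting.
        loop = GLib.MainLoop()
        GLib.timeout_add_seconds(1, loop.quit)
        loop.run()

    Called, say, as python set_badge.py 7 0.35 from the yad script.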

    Read the article

  • Unable to log into samba server

    - by Paddington
    I am unable to log into a Samba server (running on Fedora Core 6): it prompts for a username and password when I try to connect to the mapped drives from my Windows 7 machine. I decided to reset the password using the command smbpasswd paddy, and when I list users with pdbedit -L -v I see that the password was updated at the time I made this change. However, I am still unable to log in. The log file in /var/log/samba/log.paddy shows: [2012/10/11 09:55:54.605923, 1] smbd/service.c:678(make_connection_snum) create_connection_server_info failed: NT_STATUS_ACCESS_DENIED [2012/10/11 09:55:54.606635, 1] smbd/service.c:678(make_connection_snum) create_connection_server_info failed: NT_STATUS_ACCESS_DENIED How can I resolve this so that I can log in?
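
    Before digging into share permissions, it may be worth ruling out a missing or disabled passdb entry from the server side; a quick check could look like this (a sketch, using the user name from the question):

        # (Re)add the user to the Samba password database and set a password
        smbpasswd -a paddy

        # Make sure the account is enabled, i.e. not flagged [D] for disabled
        smbpasswd -e paddy
        pdbedit -L -v paddy | grep -i flags

        # Confirm the credentials work against the server itself
        smbclient -L localhost -U paddy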

    Read the article

  • Mysql out of disk space

    - by Paddy
    I have just finished developing a Rails app which has a MySQL db as a backend. The app is meant for high traffic and will store lots of information. I am planning to set up my own web server and host the site from it. If in future my disk space runs out I would want to expand by adding more space. But say my MySQL database is housed on /disk0s1 and, by adding a new drive, I have more partitions (and hence more disk space): how would I then extend my database to store information on those partitions too, and at the same time prevent any information from being written to the original partition? Should I go for multiple databases? If so, how? If I went for a hosting solution I wouldn't be bothering about this, as I would just have to worry about making payments for the extra space :) I always wondered how space is added on the fly by these web hosts. Is there any specific MySQL configuration that I have to make?
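
    One possible approach, sketched with made-up table and mount-point names: mount the new drive and create the large, fast-growing tables with a per-table data directory so their files live on the new partition while the rest of the datadir stays where it is (the server needs symlink support enabled for this):

        -- MyISAM supports per-table data/index directories; newer InnoDB versions
        -- support DATA DIRECTORY together with innodb_file_per_table.
        CREATE TABLE access_log (
            id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            created_at DATETIME NOT NULL,
            payload    TEXT
        ) ENGINE=MyISAM
          DATA DIRECTORY  = '/mnt/disk1/mysql-data'
          INDEX DIRECTORY = '/mnt/disk1/mysql-data';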

    Read the article

  • OpenDNS IP Conflict

    - by Paddy
    I have downloaded the OpenDNS client and it was working great till today, when it showed me this error: dynamic IP update failed with the message "!yours" ("Your IP address is taken by another user"). I am pretty sure I'll get away with the problem by just restarting my connection, as I'll be assigned a new dynamic IP, but I wonder how this is happening in the first place. Shouldn't a dynamic IP be unique for everyone?

    Read the article

  • IP not Pinging!

    - by Paddy
    I have an Apache server running on a Mac. I had Windows before on the same machine and I had Apache running there too. I could access my public sites by just typing my IP into the web browser. Now it's not working, although it still works with localhost.

    Read the article

  • IP not Pinging!

    - by Paddy
    I have an Apache server running on a Mac. I had Windows before on the same machine and I had Apache running there too. I could access my public sites by just typing my IP into the web browser. Now it's not working, although it still works with localhost. Weird! I've tried a lot of things but just couldn't make it work. Any help would be greatly appreciated.

    Read the article

  • MSMQ Resilience

    - by Paddy Carroll
    I have a requirement for a resilient MSMQ setup on VMware ESX 5. I am aware that we cannot allow the queue storage to be shared, as it must be installed on a physical disk mount, e.g. it can't be a CIFS or DFS share. The following constraints apply: we don't use Windows clustering, and we don't rely on hot standbys. Is there a way I can replicate the queue storage to another platform so that it can assume MSMQ duties on failure of the primary platform, using any method including queue forwarding?

    Read the article

  • Can't ping self

    - by Paddy
    I have a wireless internet connection set up on my Mac (v10.5.6). I am connected to the internet and everything is running smoothly. I recently discovered some quirky behaviour while setting up the Apache web server. When I typed my dynamic IP (http://117.254.149.11/) into the web browser to visit my site pages, it just timed out. In Terminal I tried pinging localhost and it worked: $ ping localhost PING localhost (127.0.0.1): 56 data bytes 64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.063 ms 64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.044 ms But if I pinged my own IP it would just time out: $ ping 117.254.149.11 PING 117.254.149.11 (117.254.149.11): 56 data bytes ^C --- 117.254.149.11 ping statistics --- 10 packets transmitted, 0 packets received, 100% packet loss Pinging any other site works, though. I am completely stumped. Any help would be greatly appreciated.

    Read the article

  • Scripting an 'empty' password in /etc/shadow

    - by paddy
    I've written a script to add CVS and SVN users on a Linux server (Slackware 14.0). This script creates the user if necessary, and either copies the user's SSH key from an existing shell account or generates a new SSH key. Just to be clear, the accounts are specifically for SVN or CVS. So the entry in /home/${username}/.ssh/authorized_keys begins with (using CVS as an example): command="/usr/bin/cvs server",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-rsa ....etc...etc...etc... Actual shell access will never be allowed for these users - they are purely there to provide access to our source repositories via SSH. My problem is that when I add a new user, they get an empty password in /etc/shadow by default. It looks like: paddycvs:!:15679:0:99999:7::: If I leave the shadow file as is (with the !), SSH authentication fails. To enable SSH, I must first run passwd for the new user and enter something. I have two issues with doing that. First, it requires user input which I can't allow in this script. Second, it potentially allows the user to login at the physical terminal (if they have physical access, which they might, and know the secret password -- okay, so that's unlikely). The way I normally prevent users from logging in is to set their shell to /bin/false, but if I do that then SSH doesn't work either! Does anyone have a suggestion for scripting this? Should I simply use sed or something and replace the relevant line in the shadow file with a preset encrypted secret password string? Or is there a better way? Cheers =)
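
    For what it's worth, one non-interactive approach (a sketch; how the secret is generated or stored is up to the script) is to create the crypt hash directly and write it into the shadow entry with usermod -p, which avoids both the passwd prompt and editing /etc/shadow with sed:

        #!/bin/bash
        # Sketch: give a repository-only account a non-empty password hash without prompting.
        username="paddycvs"                      # example account name from the question
        secret="$(openssl rand -base64 18)"      # random secret nobody needs to know

        hash="$(openssl passwd -1 "$secret")"    # -1 = MD5 crypt; newer openssl also offers -6 (SHA-512)
        usermod -p "$hash" "$username"

        # Note: the login shell still has to be a real shell for sshd to run the
        # forced command in authorized_keys, so /bin/false would break key access.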

    Read the article

  • Trigger a DNS change in Active Directory

    - by Paddy Carroll
    Can I get SolarWinds to change a DNS alias in Active Directory based upon a specific set of events or conditions? I have a collection of applications that use hostnames in combination with database names in order to resolve database connections. The problem is that they haven't considered how a failover would work in practice, so I want the product to provoke a change in DNS to point the apps at the right place if we get into a failure situation. Can it be done?
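
    Whichever monitoring product triggers it (SolarWinds alert actions can typically run an external program), the DNS side of such a flip can be scripted against an AD-integrated zone with dnscmd; the server, zone, alias and target names below are made up:

        rem Repoint the alias "sqlprimary" at the surviving node (hypothetical names)
        dnscmd dc01.corp.example.com /RecordDelete corp.example.com sqlprimary CNAME /f
        dnscmd dc01.corp.example.com /RecordAdd corp.example.com sqlprimary CNAME sqlnode2.corp.example.com
        rem Clients keep the old answer until the record's TTL expires, so keep the TTL short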

    Read the article

  • PowerShell, Task Scheduler or loop and sleep

    - by Paddy Carroll
    I have a job that needs to go off every minute or so. It loads a DLL I have written in C# that retrieves state for a SQL Server mirror (primary, mirror and witness) for a number of databases; it allows us to poke DNS to show where the primary instances are. Please don't mention clustering - we're not doing that. I can't be arsed to write a service; there simply isn't enough time. Do I: (a) Task Scheduler, every minute: invoke a PowerShell script that loads the DLL and does the business, or (b) Task Scheduler, at startup: invoke a similar PowerShell script that loads the DLL once but then loops and sleeps, refreshing the object that the DLL exposes? Pros and cons?
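
    For the second option, a minimal loop-and-sleep sketch (the DLL path, type name and method below are placeholders for whatever the C# assembly actually exposes):

        # Sketch: load the assembly once at startup, then refresh its state object forever.
        Add-Type -Path 'C:\Tools\MirrorState.dll'      # hypothetical path

        $monitor = New-Object MirrorMonitor            # hypothetical type exposed by the DLL

        while ($true) {
            try {
                $monitor.Refresh()                     # re-query mirror state, poke DNS as needed
            }
            catch {
                "$(Get-Date -Format s)  $($_.Exception.Message)" |
                    Out-File 'C:\Tools\MirrorMonitor.log' -Append
            }
            Start-Sleep -Seconds 60
        }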

    Read the article

  • AutoCompleteExtender - authentication failure (forms authentication)

    - by Paddy
    I'm using the AutoCompleteExtender from the AJAX control toolkit on my aspx page - I have it wired up to a WCF service that is returning a string array and everything works happily. If I change my service definition to include a demand for the caller to be authenticated, like so: <OperationContract(), PrincipalPermission(SecurityAction.Demand, Authenticated:=True)> _ Public Function GetLookupValues(ByVal prefixText As String, ByVal count As Integer, ByVal contextKey As String) As String() Then the autocomplete extender stops working, and I get an authentication error in the service. The service is set up to use ASPNetCompatibility mode, and I was hoping that the extender would pass the authentication credentials for my logged in user - does anyone know how to make this work?

    Read the article

  • ASP.NET MVC HttpException: strange file not found

    - by Paddy
    I'm running asp.net MVC site on IIS6 - I've edited my routing to look like the following: routes.MapRoute( "Default", "{controller}.aspx/{action}/{id}", new { controller = "Home", action = "Index", id = "" } ); routes.MapRoute( "Root", "", new { controller = "Home", action = "Index", id = "" } ); So all my urls now contain .aspx (as per one of the solutions from Phil Haack). Now, I catch all unhandled exceptions using Elmah, and for almost every page request, I get the following error caught by Elmah, that I never see on the front end (everything works perfectly): System.Web.HttpException: The file '/VirtualDirectoryName/Home.aspx' does not exist. System.Web.HttpException: The file '/VirtualDirectoryName/Home.aspx' does not exist. at System.Web.UI.Util.CheckVirtualFileExists(VirtualPath virtualPath) at System.Web.Compilation.BuildManager.GetVPathBuildResultInternal(VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) at System.Web.Compilation.BuildManager.GetVPathBuildResultWithNoAssert(HttpContext context, VirtualPath virtualPath, Boolean noBuild, Boolean allowCrossApp, Boolean allowBuildInPrecompile) at System.Web.Compilation.BuildManager.GetVirtualPathObjectFactory(VirtualPath virtualPath, HttpContext context, Boolean allowCrossApp, Boolean noAssert) at System.Web.Compilation.BuildManager.CreateInstanceFromVirtualPath(VirtualPath virtualPath, Type requiredBaseType, HttpContext context, Boolean allowCrossApp, Boolean noAssert) at System.Web.UI.PageHandlerFactory.GetHandlerHelper(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath) at System.Web.UI.PageHandlerFactory.System.Web.IHttpHandlerFactory2.GetHandler(HttpContext context, String requestType, VirtualPath virtualPath, String physicalPath) at System.Web.HttpApplication.MapHttpHandler(HttpContext context, String requestType, VirtualPath path, String pathTranslated, Boolean useAppConfig) at System.Web.HttpApplication.MapHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) There is a Home controller, and it should be found, but I'm not sure a) where this is being called from, and b) why I don't see this error on the front end. Any ideas? Edited with answer: I think the answer for this can be found in this question: http://stackoverflow.com/questions/34194/asp-net-mvc-on-iis6

    Read the article

  • Rails App hangs after few requests

    - by Paddy
    I have the Bitnami Rails stack installed on my Mac. To better explain my problem I created a simple scaffold-based Rails app with MySQL as the backend. I can perform simple POST and GET requests for a while, and then after a few requests the app just hangs indefinitely. No exception is caught, and there is nothing worthwhile in the development log to explain this strange behavior. This is the last bit from the development log before the app froze: Processing WritedatasController#index (for 127.0.0.1 at 2010-03-30 20:38:51) [GET] Writedata Load (0.7ms) SELECT * FROM `writedatas` Rendering template within layouts/application Rendering writedatas/index Writedata Columns (2.9ms) SHOW FIELDS FROM `writedatas` Completed in 99ms (View: 88, DB: 4) | 200 OK [http://localhost/writedatas] SQL (0.2ms) SET NAMES 'utf8' SQL (0.1ms) SET SQL_AUTO_IS_NULL=0 Processing WritedatasController#new (for 127.0.0.1 at 2010-03-30 20:38:52) [GET] Writedata Columns (2.0ms) SHOW FIELDS FROM `writedatas` Rendering template within layouts/application Rendering writedatas/new Rendered writedatas/_form (5.9ms) Completed in 34ms (View: 25, DB: 2) | 200 OK [http://localhost/writedatas/new] SQL (0.4ms) SET NAMES 'utf8' SQL (0.1ms) SET SQL_AUTO_IS_NULL=0 Processing WritedatasController#index (for 127.0.0.1 at 2010-03-30 20:39:17) [GET] Writedata Load (0.7ms) SELECT * FROM `writedatas` Rendering template within layouts/application Rendering writedatas/index Writedata Columns (2.6ms) SHOW FIELDS FROM `writedatas` Completed in 101ms (View: 90, DB: 4) | 200 OK [http://localhost/writedatas] It just hung at this point. After this happens I have to restart the server, only for it to hang again after a few requests. This is the weirdest problem I have faced and I am truly stumped.

    Read the article

  • Database structure and source control - best practice

    - by Paddy
    Background I came from several years working in a company where all the database objects were stored in source control, one file per object. We had a list of all the objects that was maintained when new items were added (to allow us to have scripts run in order and handle dependencies) and a VB script that ran to create one big script for running against the database. All the tables were 'create if not exists' and all the SP's etc. were drop and recreate. Up to the present and I am now working in a place where the database is the master and there is no source control for DB objects, but we do use redgate's tools for updating our production database (SQL compare), which is very handy, and requires little work. Question How do you handle your DB objects? I like to have them under source control (and, as we're using GIT, I'd like to be able to handle merge conflicts in the scripts, rather than the DB), but I'm going to be pressed to get past the ease of using SQL compare to update the database. I don't really want to have us updating scripts in GIT and then using SQL compare to update the production database from our DEV DB, as I'd rather have 'one version of the truth', but I don't really want to get into re-writing a custom bit of software to bundle the whole lot of scripts together. I think that visual studio database edition may do something similar to this, but I'm not sure if we will have the budget for it. I'm sure that this has been asked to death, but I can't find anything that seems to quite have the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control

    Read the article

  • Authlogic, logout, credential capture and security

    - by Paddy
    OK, this is something weird. I got authlogic-oid installed in my Rails app today. Everything works perfectly fine but for one small nuisance. This is what I did: I first register with my Google OpenID. Successful login and redirection, and my email, along with my correct OpenID, is stored in my database. I am happy that everything worked fine! Now when I log out, my Rails app as usual destroys the session and redirects me back to my root URL where I can log in again. Now if I try to log in it still remembers my last login ID. Not a big issue, as I can always "Sign in as a different user", but I am wondering if there is any way to not only log out from my app but also log out from Google. I noticed the same with Stack Overflow's OpenID authentication system. Why am I so bothered about this, you may ask. But isn't it a bad idea if your web app's end user, who happens to be in a cyber cafe, thinks he has logged out from your app and hence from his Google account, only to realize later that his Google account got hacked by some unworthy loser who noticed that the one before him had not logged out from Google and, say, changed his password? Should I be paranoid? Isn't this a major security lapse in implementing the OpenID spec? Probably someone can give me a workaround for this issue today and the question is solved for me. But what about the others who have implemented OpenID in their apps and not implemented a workaround?

    Read the article

  • Authlogic, logout and credential capture

    - by Paddy
    OK, this is something weird. I got authlogic-oid installed in my Rails app today. Everything works perfectly fine but for one small nuisance. This is what I did: I first register with my Google OpenID. Successful login and redirection, and my email, along with my correct OpenID, is stored in my database. I am happy that everything worked fine! Now when I log out, my Rails app as usual destroys the session and redirects me back to my root URL where I can log in again. Now if I try to log in it still remembers my last login ID. Not a big issue, as I can always "Sign in as a different user", but I am wondering if there is any way to not only log out from my app but also log out from Google. I noticed the same with Stack Overflow's OpenID authentication system. Why am I so bothered about this, you may ask. But isn't it a bad idea if your web app's end user, who happens to be in a cyber cafe, thinks he has logged out from your app and hence from his Google account, only to realize later that his Google account got hacked by some unworthy loser who noticed that the one before had not logged out from Google and, say, changed his password? Should I be paranoid?

    Read the article

  • Limiting landscape views in UITabBarController containing UINavigationController

    - by Spider-Paddy
    I have a tab bar application that contains navigation views in two of its tabs. I would like one view in one navigation controller to allow landscape, but because of the nav-bar-in-tab-bar limitation I now have to allow landscape for every single view in my app so that the tilt messages get passed to it, which I don't want. I thought perhaps, on the views which shouldn't go to landscape, there might be ways to either: prevent the view change, e.g. calling setOrientation:UIDeviceOrientationPortrait whenever the device goes landscape; or give the illusion that the view doesn't change, e.g. presenting a modal portrait view over the rotated view. Anybody have any ideas or experience they care to share? What is the best approach here? (I don't want to have to design a landscape view for every view just so that I can display both portrait and landscape for one view.)
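
    For reference, the usual pre-iOS 6 pattern is to let each screen answer for itself and have the containers defer to the visible child instead of blanket-allowing landscape; a sketch (the subclass names are placeholders):

        // UITabBarController subclass: defer to whichever tab is selected
        @implementation MyTabBarController
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)orientation {
            return [self.selectedViewController shouldAutorotateToInterfaceOrientation:orientation];
        }
        @end

        // UINavigationController subclass: defer to the view controller on top
        @implementation MyNavigationController
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)orientation {
            return [self.topViewController shouldAutorotateToInterfaceOrientation:orientation];
        }
        @end

        // In each portrait-only view controller (placeholder class name):
        @implementation PortraitOnlyViewController
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)orientation {
            return (orientation == UIInterfaceOrientationPortrait);
        }
        @end

        // In the one view controller that is allowed to rotate:
        @implementation RotatableViewController
        - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)orientation {
            return YES;
        }
        @end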

    Read the article

  • NSObject release destroys local copy of object's data

    - by Spider-Paddy
    I know this is something stupid on my part but I don't get what's happening. I create an object that fetches data & puts it into an array in a specific format, since it fetches asynchronously (has to download & parse data) I put a delegate method into the object that needs the data so that the data fetching object copies it's formatted array into an array in the calling object. The problem is that when the data fetching object is released, the copy it created in the caller is being erased, code is: In .h file @property (nonatomic, retain) NSArray *imagesDataSource; In .m file // Fetch item details ImagesParser *imagesParserObject = [[ImagesParser alloc] init:self]; [imagesParserObject getArticleImagesOfArticleId:(NSInteger)currentArticleId]; [imagesParserObject release] <-- problematic release // Called by parser when images parsing is finished -(void)imagesDataTransferComplete:(ImagesParser *)imagesParserObject { self.imagesDataSource = [ImagesParserObject.returnedArray copy]; // copy array to local variable // If there are more pics, they must be assembled in an array for possible UIImageView animation NSInteger picCount = [imagesDataSource count]; if(picCount > 1) // 1 image is assumed to be the pic already displayed { // Build image array NSMutableArray *tempPicArray = [[NSMutableArray alloc] init]; // Temp space to hold images while building for(int i = 0; i < picCount; i++) { // Get Nr from only article in detailDataSource & pic name (Small) from each item in imagesDataSource NSString *picAddress = [NSString stringWithFormat:@"http://some.url.com/shopdata/image/article/%@/%@", [[detailDataSource objectAtIndex:0] objectForKey:@"Nr"], [[imagesDataSource objectAtIndex:i] objectForKey:@"Small"]]; NSURL *picURL = [NSURL URLWithString:picAddress]; NSData *picData = [NSData dataWithContentsOfURL:picURL]; [tempPicArray addObject:[UIImage imageWithData:picData]]; } imagesArray = [tempPicArray copy]; // copy makes immutable copy of array [tempPicArray release]; currentPicIndex = 0; // Assume first pic is pic already being shown } else imagesArray = nil; // No need for a needless pic array // Remove please wait message [pleaseWaitViewControllerObject.view removeFromSuperview]; } I put in tons of NSLog lines to keep track of what was going on & self.imagesDataSource is populated with the returned array but when the parser object is released self.imagesDataSource becomes empty. I thought self.imagesDataSource = [ImagesParserObject.returnedArray copy]; is supposed to make an independant object, like as if it was alloc, init'ed, so that self.imagesDataSource is not just a pointer to the parser's array but is it's own array. So why does the release of the parser object clear the copy of the array. (I checked & double checked that it's not something overwriting self.imagesDataSource, commenting out [imagesParserObject release] consistently fixes the problem) Also, I have exactly the same problem with self.detailDataSource which is declared & populated in the exact same way as self.imagesDataSource I thought that once I call the parser I could release it because the caller no longer needs to refer to it, all further activity is carried out by the parser object through it's delegate method, what am I doing wrong?

    Read the article

  • NSArray safeguards

    - by Spider-Paddy
    If there is a chance that an NSArray is empty, is it better to check it and set it to nil when it is assigned, or rather to do the check when it is used? e.g. NSArray *myArray; if ([anotherArray count] > 0) <-- Check when assigned myArray = [anotherArray copy]; else myArray = nil; something = [myArray objectAtIndex:x]; or NSArray *myArray; myArray = [anotherArray copy]; if ([myArray count] > 0) <-- Check when used something = [myArray objectAtIndex:x]; Which is better?

    Read the article

1 2  | Next Page >