Daily Archives

Articles indexed Saturday, January 15, 2011


  • How are Python pages coded and what can the language be compared to? [closed]

    - by avon_verma
    I have a few questions about Python. I've seen many pages on Google with .py extensions, like these: http://mail.google.com/support/bin/answer.py?answer=6583 https://www.google.com/adsense/support/bin/topic.py?topic=13488 1: Are pages like these built from pure Python code that prints out HTML, like print "<div etc...", or are they like typical ASP, JSP, or PHP pages, with HTML pages and embedded Python code like: <html> <% some python code %> </html> 2: What is Python mainly used for making? Windows apps, the web, or something else? 3: Are Ruby and Perl also similar to Python?

    Read the article

  • Is there a simpler way to redirect using a route while adding parameters in Kohana?

    - by Darryl Hein
    I find myself doing the following or similar quite often:

        Request::instance()->redirect(Route::get('route')->uri(array('action' => 'action')));

    Or:

        Request::instance()->redirect(Route::get(Route::name(Request::instance()->route))->uri(array('action' => 'action')));

    I'm wondering if there's a shorter, simpler way of doing this. I love the Route functionality, but it makes for some long lines of PHP.
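
    One way to shorten this (a sketch, not a built-in Kohana API) is to wrap the long call in a small helper; the function name, its location (e.g. a base controller), and the parameters below are hypothetical:

        <?php
        // Minimal helper sketch: redirect to a named route with parameters.
        // Uses only the Route/Request calls shown in the question above.
        function redirect_to_route($route_name, array $params = array())
        {
            Request::instance()->redirect(Route::get($route_name)->uri($params));
        }

        // Usage:
        redirect_to_route('route', array('action' => 'action'));
        ?>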

    Read the article

  • Creating Synch Point In TFS Source Tree Development Cycle

    - by Rob G
    Our development cycle rarely requires a branch, so we have what TFS appears to consider a single, never-ending development cycle. Our problem is that each build includes an ever-increasing "Generating list of changesets and updating work items" step that covers all changesets/work items back to day 1. What is the proper step we need to perform to formally lock and label (wrong terms, I'm sure) the source tree so that a new cycle of changesets and work items can begin? Thanks!

    Read the article

  • Web.Config is Cached

    - by SGWellens
    There was a question from a student over on the ASP.NET forums about improving site performance. The concern was that every time an app setting was read from the Web.Config file, the disk would be accessed. With many app settings and many users, it was believed performance would suffer. Their intent was to create a class to hold all the settings, instantiate it, and fill it from the Web.Config file on startup. Then all the settings would be in RAM. I knew this was not correct and didn't want to just say so without any corroboration, so I did some searching. Surprisingly, this is a common misconception. I found other code postings that cached the app settings from Web.Config. Many people even thanked the posters for the code. In a later post, the student said their textbook recommended caching the Web.Config file.

    OK, here's the deal: the Web.Config file is already cached. You do not need to re-cache it. From this article http://msdn.microsoft.com/en-us/library/aa478432.aspx "It is important to realize that the entire <appSettings> section is read, parsed, and cached the first time we retrieve a setting value. From that point forward, all requests for setting values come from an in-memory cache, so access is quite fast and doesn't incur any subsequent overhead for accessing the file or parsing the XML."

    The reason the misconception is prevalent may be that it's hard to search for Web.Config and cache without getting a lot of hits on how to set up caching in the Web.Config file. So here's a string for search engines to index on: "Is the Web.Config file Cached?"

    A follow-up question was: are the connection strings cached? Yes. From http://msdn.microsoft.com/en-us/library/ms178683.aspx "At run time, ASP.NET uses the Web.Config files to hierarchically compute a unique collection of configuration settings for each incoming URL request. These settings are calculated only once and then cached on the server." And, as everyone should know, if you modify the Web.Config file, the web application will restart.

    I hope this helps people to NOT write code! Steve Wellens (CodeProject)

    Read the article

  • Google analytics and multiple independent subdomains

    - by MTilsted
    I need some help setting up Google Analytics correctly. Here is my setup: we host sites for multiple customers, and each customer has their own subdomain on our site, so we have customerA.oursite.com and customerB.oursite.com. As we add more customers, we get more subdomains. We want to track all data for each customer independently, but I don't want to create a new Google tracking code for each new customer. So my plan is to track all visits with "oursite.com" and then create a filter in Google Analytics to get data for each specific customer (all visits for a specific subdomain). Is this (one tracking code, and a subdomain filter) the right way to do it? To create a subdomain filter, I add a new profile for each customer and then add a custom filter saying include "Request URI" and fill in "CustomerDomain.oursite.com". Is this the correct way to do it? And a general question about filters: is it really impossible to create a new filter by applying it to data in an existing profile? I would really like to just collect all the data in one "main" profile and then create subdomain filters as we need them. But it seems that Google only applies filters to new incoming data, not existing data. Is this really true? The following is my tracking code. Is '_setDomainName','none' the right thing to do?

        <script type="text/javascript">
        /* Tracking code for qrtown.com */
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-11584298-10']);
        _gaq.push(['_setDomainName', 'none']);
        _gaq.push(['_trackPageview']);
        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();
        </script>

    Read the article

  • OpenBSD logins via SSH seem to be ignoring my configured RADIUS server

    - by Steve Kemp
    I've installed and configured a RADIUS server on my localhost; it delegates auth to a remote LDAP server. Initially things look good, and I can test via the console:

        # export user=skemp
        # export pass=xxx
        # radtest $user $pass localhost 1812 $secret
        Sending Access-Request of id 185 to 127.0.0.1 port 1812
            User-Name = "skemp"
            User-Password = "xxx"
            NAS-IP-Address = 192.168.1.168
            NAS-Port = 1812
        rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=185,

    Similarly, I can use the login tool to do the same thing:

        bash-4.0# /usr/libexec/auth/login_radius -d -s login $user radius
        Password: $pass
        authorize

    However, remote logins via SSH are failing, and so are invocations of "login" started by root. Looking at /var/log/radiusd.log I see no actual log of success/failure, which I do see when using either of the previous tools. Instead, sshd is just logging:

        sshd[23938]: Failed publickey for skemp from 192.168.1.9
        sshd[23938]: Failed keyboard-interactive for skemp from 192.168.1.9 port 36259 ssh2
        sshd[23938]: Failed password for skemp from 192.168.1.9 port 36259 ssh2

    In /etc/login.conf I have this:

        # Default allowed authentication styles
        auth-defaults:auth=radius:
        ...
        radius:\
            :auth=radius:\
            :radius-server=localhost:\
            :radius-port=1812:\
            :radius-timeout=1:\
            :radius-retries=5:

    Read the article

  • Change ownership of directory and all contents to a new user from root.

    - by Andrew Fashion
    I created a website under /var/www/html/ entirely as root: all images, files, .htaccess, directories, etc. I uploaded and configured everything as root. I want to give the site its own username/password so it's not owned by root. I don't have the user account created yet either, and I also want to set up FTP for that account. There are also about 30 GB of images in the folder. How can I go about changing all of this? I am running CentOS 5.5 64-bit. Thank you!

    Read the article

  • Upgrading from MySQL Server to MariaDB

    - by Korrupzion
    I've heard that MariaDB has better performance than MySQL Server. I'm running software that makes intensive use of MySQL, which is why I want to try upgrading to MariaDB. Please tell me your experiences doing this conversion, and any instructions or tips. Also, which files should I take care of when making a backup of MySQL Server, so that if something goes wrong with MariaDB I can roll back to MySQL without issues? I would use the following, but I'm not sure if it's enough to get a full backup of the MySQL Server configuration and databases:

        mysqldump --all-databases
        backup /etc/mysql

    My environment:

        uname -a (Debian Lenny)
        Linux charizard 2.6.26-2-amd64 #1 SMP Thu Sep 16 15:56:38 UTC 2010 x86_64 GNU/Linux
        MySQL Server version: 5.0.51a-24+lenny4
        MySQL client: 5.0.51a
        Statistics: Threads: 25  Questions: 14690861  Slow queries: 9  Opens: 21428
        Flush tables: 1  Open tables: 128  Queries per second avg: 162.666
        Uptime: 1 day 1 hour 5 min 13 sec

    Thanks! PS: Rate my english :D

    Read the article

  • Installing Ubuntu on an External Drive

    - by Dom
    I am trying to install Ubuntu on an external drive. I am a programmer who wants to start using Linux. I downloaded the USB installer from the Ubuntu website and followed all the steps, but when I get to the part where I have to set up the partitioning, I get an error when moving forward: "No root file system is defined". I've been doing some research and I think I have to partition the external drive, but I do not know how to do so. The thing is, I only want 20 GB used from that external drive and want the rest left for storage. I am also a musician and use Pro Tools, so I would like to keep all my files there, but I don't want Ubuntu on my main hard drive since the external one is portable. I'd also appreciate it if you could provide the steps.

    Read the article

  • Is there a way to refresh Notepad?

    - by chama
    I'm not sure if this is the correct place to ask this, but I checked on Google and wasn't able to find out for sure. Say, for example, that there's one process writing to a file. While the process is running, I open the file in Notepad, and the process keeps writing to the file. Other than closing and reopening the file, is there any way for me to "refresh" the data that Notepad is showing? TIA!

    Read the article

  • Tried to install Mint to a Flash Drive. Now I can't boot from the main hard disk.

    - by Dan
    Hello, all. I'm kind of new to Linux and I need some help. I wanted to install a Linux distro to a flash drive so that I can have a portable OS with all my settings, programs, etc. wherever I go. So I fired up a Linux Mint live CD and installed Mint to the flash drive, and this seems to work OK. But now, whenever I try to boot up my system normally without the flash drive plugged in, it doesn't seem to work. It basically hangs for a bit, and then I get the following prompt:

        error: no such device: (some long hex val)
        grub rescue>

    However, when I power my system up with the USB drive plugged into the computer, it gives me an option between using the OS installed on my USB drive and the OS installed on my hard drive. Selecting the latter, everything loads up just fine. I'm guessing that installing Mint to the flash drive somehow messed with my native GRUB installation, but, again, I'm kind of new to Linux, so I'm not sure exactly why. Any help is greatly appreciated.

    Read the article

  • Issues with Verizon's "Network Extender" device talking on my home network.

    - by Logan
    I recently switched my phone service to Verizon from ATT, and I get somewhat spotty service in my house. I called them and they sent me a "network extender" device for free. It's a femtocell that connects to my home network. The directions that come with it are very dumbed down; they basically just say to connect it to your router and put it near a window (so it can get a GPS signal; it has to make sure it's within the correct area before operating). The problem I'm having is that the network light on it stays red. The troubleshooting information that came with it tells me this means there is a bad network connection. It's connected through an ASUS router running DD-WRT. No other devices on my network have a problem with it, including a Western Digital WDLIVE device, mine and my wife's cell phones (via wifi), a Wii, and an Xbox. If I connect the device directly to my cable modem, the light goes blue (which means good) and it starts working. So this tells me that it's definitely a configuration issue with my router. Verizon basically washed their hands of me when I connected it to my cable modem, and told me that it's a router issue and to try a different router. Because normal people just have extra routers laying around their houses... When I connect it to the router, I can watch the DHCP clients list on the status page, and the MAC of the network extender quickly fills up the clients list, grabbing every available DHCP address. It's like it grabs an address, can't connect to the internet, releases it, grabs another, then another, then another. So in the DHCP server settings I assigned a static IP to its MAC. This made it quit doing what it was doing before, but it's still not working. I found the ports I needed to open on Verizon's website, and opened them in the port forwarding config on my router. This still didn't help. So I tried setting the network extender device's IP as the DMZ IP on the router. This still did no good. I called Verizon back and got the tech to write up a report, which he passed on to a "senior network tech" who called me back a few hours ago. This guy told me that while an ASUS router isn't listed as a supported device, he's not really sure why it's not working. He suggested restoring the firmware to the stock ASUS firmware and trying again. I have a very hard time believing it's DD-WRT doing this, since every other device is working just fine with it. But it's also not the network extender, since it works just fine when connected directly to the modem. At this point I'm out of ideas, and the next step is to restore the stock firmware on my router and then go to Walmart and get a Linksys WRT-54G to try. Is there anything else I could try before going that drastic?

    Cliffs:
    - Network extender won't work behind the router; works when connected directly to the cable modem.
    - Extender goes nuts when allowed to pick its own DHCP address; I had to assign it a static IP.
    - Won't work when the correct ports are forwarded to it.
    - Won't work with a DMZ address.

    Read the article

  • SQLAuthority News – Best Practices for Data Warehousing with SQL Server 2008 R2

    - by pinaldave
    An integral part of any BI system is the data warehouse, a central repository of data that is regularly refreshed from the source systems. The new data is transferred at regular intervals by extract, transform, and load (ETL) processes. This whitepaper discusses best practices for data warehousing, covering ETL, analysis, and reporting as well as the relational database. Its main focus is on 'architecture' and 'performance'. Download Best Practices for Data Warehousing with SQL Server 2008 R2. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Data Warehousing, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Is there an alternative to die?

    - by TJDeatwiler
    Sorry for the dramatic-sounding title; I just wanted to know if there is a way to prevent all types of PHP commands from executing EXCEPT one. For example, right now when I kill a script using die(), my pages look half broken because the bottom part of the page's HTML fails to load, since it was being brought in using the include() function. So is there a way to tell PHP "don't allow any more commands to be executed except the include function"?
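
    One common approach (a sketch, not the only option) is to throw an exception instead of calling die() and catch it at the page level, so the footer include still runs; the file names, the $record lookup, and the message below are hypothetical:

        <?php
        include 'header.php';

        try {
            $record = false; // e.g. the result of a failed lookup (placeholder)
            if (!$record) {
                throw new Exception('Record not found.');
            }
            // ... normal page output ...
            echo '<p>Page content goes here.</p>';
        } catch (Exception $e) {
            // Show the error, but keep going so the footer is still included.
            echo '<p class="error">' . htmlspecialchars($e->getMessage()) . '</p>';
        }

        include 'footer.php';
        ?>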

    Read the article

  • Using json_encode on objects in PHP

    - by Alan
    Hi, I'm trying to output lists of objects as JSON and would like to know if there's a way to make objects usable by json_encode. The code I've got looks something like:

        $related = $user->getRelatedUsers();
        echo json_encode($related);

    Right now, I'm just iterating through the array of users and individually exporting them into arrays for json_encode to turn into usable JSON for me. I've already tried making the objects iterable, but json_encode just seems to skip them anyway.
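
    For reference, a minimal sketch of one approach: json_encode() only serializes public properties, so each class can expose an explicit export method (the User class and its fields here are hypothetical; newer PHP versions, 5.4+, also offer the JsonSerializable interface for this):

        <?php
        class User {
            private $id;
            private $name;

            public function __construct($id, $name) {
                $this->id = $id;
                $this->name = $name;
            }

            // Return exactly the fields we want to appear in the JSON output.
            public function toArray() {
                return array('id' => $this->id, 'name' => $this->name);
            }
        }

        $related = array(new User(1, 'Alice'), new User(2, 'Bob'));

        // Convert each object to an array before encoding the whole list.
        echo json_encode(array_map(function ($u) { return $u->toArray(); }, $related));
        ?>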

    Read the article

  • Tracking Votes and only allowing 1 vote per member

    - by MikeAdams
    What I'm trying to do is count the votes when someone votes on a "page". I think I lost myself trying to figure out how to track when a member votes or not. I can't seem to get the code to tell when a member has voted.

        //Generate code ID
        $useXID = intval($_GET['id']);
        $useXrank = $_GET['rank'];
        //if($useXrank!=null && $useXID!=null) {
        $rankcheck = mysql_query('SELECT member_id,code_id FROM code_votes WHERE member_id="'.$_MEMBERINFO_ID.'" AND WHERE code_id="'.$useXID.'"');
        if(!mysql_fetch_array($rankcheck) && $useXrank=="up"){
            $rankset = mysql_query('SELECT * FROM code_votes WHERE member_id="'.$_MEMBERINFO_ID.'"');
            $ranksetfetch = mysql_fetch_array($rankset);
            $rankit = htmlentities($ranksetfetch['ranking']);
            $rankit+="1";
            mysql_query("INSERT INTO code_votes (member_id,code_id) VALUES ('$_MEMBERINFO_ID','$useXID')") or die(mysql_error());
            mysql_query("UPDATE code SET ranking = '".$rankit."' WHERE ID = '".$useXID."'");
        }
        elseif(!mysql_fetch_array($rankcheck) && $useXrank=="down"){
            $rankset = mysql_query('SELECT * FROM code_votes WHERE member_id="'.$_MEMBERINFO_ID.'"');
            $ranksetfetch = mysql_fetch_array($rankset);
            $rankit = htmlentities($ranksetfetch['ranking']);
            $rankit-="1";
            mysql_query("INSERT INTO code_votes (member_id,code_id) VALUES ('$_MEMBERINFO_ID','$useXID')") or die(mysql_error());
            mysql_query("UPDATE code SET ranking = '".$rankit."' WHERE ID = '".$useXID."'");
        }
        // hide vote links since already voted
        elseif(mysql_fetch_array($rankcheck)){$voted="true";}
        //}
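
    One likely culprit (a guess, not a verified fix): the lookup query above has two WHERE keywords, and mysql_fetch_array() is called repeatedly on the same result. A minimal sketch of a duplicate-vote check follows; table and column names are taken from the snippet, everything else is hypothetical:

        <?php
        $memberId = intval($_MEMBERINFO_ID);
        $codeId   = intval($_GET['id']);

        // Single WHERE clause with both conditions joined by AND.
        $rankcheck = mysql_query(
            'SELECT member_id FROM code_votes' .
            ' WHERE member_id="' . $memberId . '" AND code_id="' . $codeId . '"'
        ) or die(mysql_error());

        if (mysql_num_rows($rankcheck) > 0) {
            $voted = "true";   // already voted: hide the vote links
        } else {
            // Record the vote, then adjust the ranking in a single UPDATE.
            mysql_query("INSERT INTO code_votes (member_id, code_id)
                         VALUES ('$memberId', '$codeId')") or die(mysql_error());
            $delta = ($_GET['rank'] == "up") ? 1 : -1;
            mysql_query("UPDATE code SET ranking = ranking + $delta WHERE ID = '$codeId'")
                or die(mysql_error());
        }
        ?>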

    Read the article

  • Copying a database into a new database including structure and data

    - by Jason
    In phpMyAdmin, under Operations, I can "Copy database to:" and select "Structure and data", "CREATE DATABASE before copying", and "Add AUTO_INCREMENT value". I need to be able to do that without using phpMyAdmin. I know how to create the database and user. I have a source database that's a shell I can work from, so all I really need is how to copy all the table structure and data (I know, the harder part). system() and exec() are not options for me, which rules out mysqldump (I think). How can I loop through each table and recreate its structure and data? Is it just looping through the results of SHOW TABLES, then for each table looping through DESCRIBE tablename? Then, is there an easy way to get the data copied?
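
    A minimal sketch of the SHOW TABLES approach, using CREATE TABLE ... LIKE plus INSERT ... SELECT so no dump file is needed (connection details and database names are hypothetical; note this copies tables and data but not views, triggers, or foreign-key creation order):

        <?php
        $link = mysql_connect('localhost', 'user', 'pass') or die(mysql_error());

        $source = 'source_db';
        $target = 'target_db';

        mysql_query("CREATE DATABASE IF NOT EXISTS `$target`", $link) or die(mysql_error());

        $tables = mysql_query("SHOW TABLES FROM `$source`", $link) or die(mysql_error());
        while ($row = mysql_fetch_row($tables)) {
            $table = $row[0];
            // Copy the structure (columns, indexes), then copy the data.
            mysql_query("CREATE TABLE `$target`.`$table` LIKE `$source`.`$table`", $link)
                or die(mysql_error());
            mysql_query("INSERT INTO `$target`.`$table` SELECT * FROM `$source`.`$table`", $link)
                or die(mysql_error());
        }
        ?>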

    Read the article

  • How do you set the default source for the Output window in Visual Studio?

    - by Grank
    We added a SharePoint BDC model project to a solution in Visual Studio 2010. Ever since, whenever the solution is built, instead of showing the Build output in the Output window, it insists on having "SharePoint Tools" selected in the "Show Output from:" drop-down, just to say "Model validation started ... Model validation completed with no errors." Short of shutting off any SharePoint projects in the build configuration, can this behavior be overridden?

    Read the article

  • Able to ping but cannot browse after my Python program has been running for several hours

    - by Shane
    It's a GUI program I wrote in Python that checks website/server status, running on XP SP3; multiple threads are used to check different sites/servers. After several hours of running, the program starts getting "urlopen error timed out" all the time, and this always happens right after a POST request to a server (not a particular one; it might be A, B, or C). It's also not the first POST request that causes the problem: normally, after several hours of running, it happens to make a POST request at some unknown moment, and from then on all you get is "urlopen error timed out". I'm still able to ping but cannot browse any site; once the program is closed, everything's fine. It's definitely the program causing this problem, but I don't know how to debug/check what the problem is. I also don't know if it's on the OS side or my program wasting too many resources/connections (are you still able to ping when too many connections are used?). Would anybody please help me out?

    Read the article

  • Observer pattern used with decorator pattern

    - by icelated
    I want to make a program that does an order entry system for beverages (I will probably do description and cost). I want to use the Decorator pattern and the Observer pattern. I made a UML drawing and saved it as a pic for easy viewing. This site won't let me upload it as a Word doc, so I have to upload a pic; I hope it's easily viewable. I need to know if I am doing the UML / design patterns correctly before moving on to the coding part. Beverage is my abstract component class. Espresso, houseblend, and darkroast are my concrete subject classes. I also have a condiment decorator class: milk, mocha, soy, whip. Would those be my observers, because they would be interested in data changes to cost? Now, would the espresso, houseblend, etc. be my SUBJECT and the condiments be my OBSERVER? My theory is that cost is what changes and that the condiments need to know about the changes. So, subject = espresso, houseblend, darkroast, etc. (they hold cost()); observer = milk, mocha, soy, whip (they hold cost()). Those would be the concrete components, and the milk, mocha, soy, whip would be the decorators! So, following good software engineering practices ("design to an interface, not an implementation" or "identify the things that change from those that don't"), would I need a costbehavior interface? If you look at the UML you will see where I am going with this, and whether I am implementing the Observer + Decorator pattern correctly. I think the decorator part is correct. Since the pic is not very viewable, I will detail the classes here: the Beverage class (register observer, remove observer, notify observer, description); the concrete beverage classes espresso, houseblend, darkroast, decaf (cost, getdescription, setcost, costchanged); an observer interface (update) // cost?; a costbehavior interface (cost) // since this changes?; a condiment decorator class (getdescription); and the concrete classes linked to the two interfaces and the decorator: milk, mocha, soy, whip (cost, getdescription, update). These are my decorator/wrapper classes. Thank you. Is there a way to make this picture bigger?

    Read the article

  • JSP displaying source code instead of executing

    - by DJStroky
    I'm new to JSP and have run into some trouble. Initially, the JSP file and associated Java classes were built and tested fine on a test Tomcat server. Now they've been moved to another server with what I believe is the same setup (except it's Linux now instead of Windows). But when the JSP page is accessed, the source code is displayed instead of the JSP actually executing. I've googled for a while with no success. I had thought that this page might solve the problem, since there was no reference to the JSP file I was using, or even the following snippets, in my web.xml file in the WEB-INF folder:

        <servlet>
            <servlet-name>jsp</servlet-name>
            <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
            <init-param>
                <param-name>logVerbosityLevel</param-name>
                <param-value>WARNING</param-value>
            </init-param>
            <load-on-startup>3</load-on-startup>
        </servlet>

        <servlet-mapping>
            <servlet-name>jsp</servlet-name>
            <url-pattern>*.jsp</url-pattern>
        </servlet-mapping>

    I tried inserting these lines and restarting Tomcat, but no success. Any ideas?

    Read the article

  • How to build a RESTful API?

    - by Sharon Haim Pour
    Hi friends, the issue is this: I have a web application that runs on a PHP server and I'd like to build a REST API for it. I did some research and figured out that a REST API uses HTTP methods (GET, POST, ...) on certain URIs, with an authentication key (not necessarily), and that the information is sent back as an HTTP response with the data as XML or JSON (I'd rather use JSON). My questions are: 1. How do I, as the developer of the app, build those URIs? Do I need to write PHP code at each URI? 2. How do I build the JSON objects to return as a response? I hope I was clear enough. Thanks!
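
    For illustration, a minimal sketch of a single PHP endpoint (e.g. a users.php script that a rewrite rule maps to /api/users); the paths, field names, and stubbed lookup are hypothetical:

        <?php
        header('Content-Type: application/json');

        $method = $_SERVER['REQUEST_METHOD'];
        $id     = isset($_GET['id']) ? intval($_GET['id']) : null;

        switch ($method) {
            case 'GET':
                // Look the resource up (stubbed here) and return it as JSON.
                $user = array('id' => $id, 'name' => 'example');
                echo json_encode($user);
                break;
            case 'POST':
                // Read the raw request body and create the resource.
                $input = json_decode(file_get_contents('php://input'), true);
                echo json_encode(array('created' => true, 'input' => $input));
                break;
            default:
                header('HTTP/1.1 405 Method Not Allowed');
                echo json_encode(array('error' => 'Unsupported method'));
        }
        ?>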

    Read the article

  • Bash script — determine if file modified?

    - by Alan H.
    I have a Bash script that repeatedly copies files every 5 seconds. But this is a touch overkill, as usually there is no change. I know about the Linux command watch, but as this script will be used on OS X computers (which don't have watch, and I don't want to make everyone install MacPorts) I need to be able to check whether a file has been modified using straight Bash code. Should I be checking the file's modification time? How can I do that? Edit: I was hoping to expand my script to do more than just copy the file if it detected a change. So is there a pure-Bash way to do this?

    Read the article

  • Proper Usage of SqlConnection in .NET

    - by Jojo
    Hi guys, I just want an opinion on the proper usage, or a proper design, with regard to using the SqlConnection object. Which of the two below is the better approach?

    A data provider class whose methods each contain a SqlConnection object (disposed of when done), like:

        IList<Employee> GetAllEmployees()
        {
            using (SqlConnection connection = new SqlConnection(this.connectionString))
            {
                // Code goes here...
            }
        }

        Employee GetEmployee(int id)
        {
            using (SqlConnection connection = new SqlConnection(this.connectionString))
            {
                // Code goes here...
            }
        }

    or:

        SqlConnection connection; // initialized in constructor

        IList<Employee> GetAllEmployees()
        {
            this.TryOpenConnection(); // tries to open member SqlConnection instance
            // Code goes here...
            this.CloseConnection();
            // return
        }

        Employee GetEmployee(int id)
        {
            this.TryOpenConnection(); // tries to open member SqlConnection instance
            // Code goes here...
            this.CloseConnection();
            // return
        }

    Or is there a better approach than this? I have a focused web crawler type of application, and this application will crawl 50 or more websites simultaneously (multithreaded), with each website contained in a crawler object and each crawler object having an instance of the data provider class (above). Please advise. Thanks.

    Read the article

  • WP7 - Cancelling ContextMenu click event propagation

    - by Praetorian
    I'm having a problem when the Silverlight toolkit's ContextMenu is clicked while it is over a UIElement that has registered a Tap event GestureListener. The context menu click propagates to the underlying element and fires its tap event. For instance, say I have a ListBox and each ListBoxItem within it has registered both a ContextMenu and a Tap GestureListener. Assume that clicking context menu item2 is supposed to take you to Page1.xaml, while tapping on any of ListBox items themselves is supposed to take you to Page2.xaml. If I open the context menu on item1 in the ListBox, then context menu item2 is on top of ListBox item2. When I click on context menu item2 I get weird behavior where the app navigates to Page1.xaml and then immediately to Page2.xaml because the click event also triggered the Tap gesture for ListBox item2. I've verified in the debugger that it is always the context menu that receives the click event first. How do I cancel the context menu item click's routed event propagation so it doesn't reach ListBox item2? Thanks for your help!

    Read the article
