Search Results

Search found 8020 results on 321 pages for 'webcenter sites'.

Page 290/321 | < Previous Page | 286 287 288 289 290 291 292 293 294 295 296 297  | Next Page >

  • redirecting the root domain - SEO and other issues, need some guidance!

    - by Jim Sp
    I'm not familiar with some of these forwarding methods and I need help. My issue is this: I have a site hosted on discountasp.net. My domain was registered through 1&1 and I redirected the DNS to what discountasp.net wanted. So when a user types www.mydomain.com, he/she sees the ASP.NET site hosted on discountasp.net, which is all fine.

    My main page is Index.aspx. I really suck at HTML page design and I don't have the time or the talent to fiddle with it (or money to get it done by a pro). The rest of the pages are fine. I want to use a good theme from Tumblr or Blogger - one of the blog sites - and create a page that I want to use as the first page, directly on Blogger or Tumblr, say yyy.blogspot.com (I have many reasons, so for now please don't bash my decision - let's just say that's what I want). That means when a user types www.mydomain.com, it should redirect to the Blogger or Tumblr page. Everything else stays the same - the links on the blog page will say www.mydomain.com/xxxx and show what's on the hosted website. I have set up the IIS rewrite rules etc. so that all works just fine.

    The bottom line is I want to show an external site's web page as my root page. I suppose I'm struggling to even explain what I want! I can of course do a Response.Redirect on the Index.aspx page - which is the simplest way to manage this - but the big question is: will this hurt SEO in some way? If not, that's what I would do and leave the rest of the infrastructure intact (I have already done this as a test and it works fine).

    Thank you very much, j

    Read the article

  • Looping over commits for a file with jGit

    - by Andy Jarrett
    I've managed to get to grips with the basics of jGit in terms of connecting to a repo and adding, committing, and even looping over the commit messages for the files.

        File gitDir = new File("/Users/myname/Sites/helloworld/.git");
        RepositoryBuilder builder = new RepositoryBuilder();
        Repository repository = builder.setGitDir(gitDir).readEnvironment()
                .findGitDir().build();

        Git git = new Git(repository);
        RevWalk walk = new RevWalk(repository);
        RevCommit commit = null;

        // Add all files
        // AddCommand add = git.add();
        // add.addFilepattern(".").call();

        // Commit them
        // CommitCommand commit = git.commit();
        // commit.setMessage("Commiting from java").call();

        Iterable<RevCommit> logs = git.log().call();
        Iterator<RevCommit> i = logs.iterator();
        while (i.hasNext()) {
            commit = walk.parseCommit(i.next());
            System.out.println(commit.getFullMessage());
        }

    What I want to do next is be able to get all the commit messages for a single file, and then be able to revert that single file back to a specific reference/point in time.
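    For reference, a rough sketch of both steps with jGit's porcelain API - restricting the log to one path and then checking that path out at a chosen commit. The calls are the ones I recall from jGit (LogCommand.addPath, CheckoutCommand.setStartPoint/addPath), and the file name is just an example, so treat this as a starting point rather than a drop-in answer.

        import java.io.File;
        import org.eclipse.jgit.api.Git;
        import org.eclipse.jgit.lib.Repository;
        import org.eclipse.jgit.lib.RepositoryBuilder;
        import org.eclipse.jgit.revwalk.RevCommit;

        public class FileHistory {
            public static void main(String[] args) throws Exception {
                Repository repository = new RepositoryBuilder()
                        .setGitDir(new File("/Users/myname/Sites/helloworld/.git"))
                        .readEnvironment().findGitDir().build();
                Git git = new Git(repository);
                String path = "index.html";   // hypothetical file of interest

                RevCommit target = null;
                // addPath() limits the walk to commits that touched this one file.
                for (RevCommit commit : git.log().addPath(path).call()) {
                    System.out.println(commit.getName() + " " + commit.getShortMessage());
                    target = commit;          // ends up holding the oldest commit seen
                }

                if (target != null) {
                    // Restore just this file to its state at the chosen commit;
                    // the rest of the working tree is left alone.
                    git.checkout().setStartPoint(target).addPath(path).call();
                }
            }
        }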

    Read the article

  • Git repos over multiple machines - backups and keeping in sync

    - by a-or-b
    I'm new to git so please feel free to RTFM me...

    I have multiple development sites (none of which can communicate with each other over a network) and am working on a few projects (with a few people) at any one time. What I would ideally have is, at each site, a centralized repository that can be pulled from, but development would occur in our own (personal) repos. Then I would like to be able to sync across the centralized repos (via USB key, for example). I want a centralized repo at each location because (1) I'm new to git and do break my (personal) local repo by playing around, and (2) some projects get put on hold, so I want to be able to free up disk space by deleting them. This is the "backup" part of my question.

    I was also hoping to be able to use 'git clone --bare' for my centralized repos (and the USB key repos too?) as we don't need the full checkout, just the git benefits. However, I can't seem to get a bare repo to work as a repo I can push from. I've used 'git remote' to set up a remote origin (similar to http://toolmantim.com/thoughts/setting_up_a_new_remote_git_repository) but I can't get 'git push' to work - it seems I need a checked-out repo.

    Does anyone else use this sort of repo/development structure, or is there something fundamental about git usage that I'm missing?

    A solution that I thought about that might not work: if I had a 'git clone --bare' at each site and then used a git repo on my removable media which has remotes set up for each site, then I could ('pull') sync my USB key with each repo. But then can I update the site repo from my USB key? Could I push from USB?

    Read the article

  • Are there any security issues to avoid when providing an either-email-or-username-can-act-as-username login

    - by Tchalvak
    I am in the process of moving from a "username/password" system to one that uses email for login. I don't think that there's any horrible problem with allowing either email or username for login, and I remember seeing sites that I consider somewhat respectable doing it as well, but I'd like to be aware of any major security flaws that I may be introducing.

    More specifically, here is the pertinent function (the query_row function parameterizes the sql):

        function authenticate($p_user, $p_pass) {
            $user = (string)$p_user;
            $pass = (string)$p_pass;
            $returnValue = false;
            if ($user != '' && $pass != '') {
                // Allow login via username or email.
                $sql = "SELECT account_id, account_identity, uname, player_id
                        FROM accounts
                        join account_players on account_id=_account_id
                        join players on player_id = _player_id
                        WHERE lower(account_identity) = lower(:login)
                           OR lower(uname) = lower(:login)
                          AND phash = crypt(:pass, phash)";
                $returnValue = query_row($sql, array(':login'=>$user, ':pass'=>$pass));
            }
            return $returnValue;
        }

    Notably, I have added the "WHERE lower(account_identity) = lower(:login) OR lower(uname) = lower(:login)" ...etc section to allow graceful backwards compatibility for users who won't be used to using their email for the login procedure. I'm not completely sure that that OR is safe, though. Are there some ways that I should tighten the security of the php code above?
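    One thing worth double-checking in any version of this query: in SQL, AND binds more tightly than OR, so as written the phash check only applies to the uname branch, and a row matching account_identity alone would satisfy the WHERE clause without a password match. Below is a minimal sketch of the parenthesized form - written in Java/JDBC purely for illustration, with the table and column names taken from the question and crypt() assuming PostgreSQL's pgcrypto as in the original.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class LoginCheck {
            // True if the supplied login (email or username) and password identify an account.
            static boolean authenticate(Connection conn, String login, String pass) throws Exception {
                // Parentheses keep the password check applied to BOTH login branches;
                // without them the database reads "identity OR (uname AND phash)".
                String sql =
                    "SELECT account_id FROM accounts" +
                    " JOIN account_players ON account_id = _account_id" +
                    " JOIN players ON player_id = _player_id" +
                    " WHERE (lower(account_identity) = lower(?) OR lower(uname) = lower(?))" +
                    "   AND phash = crypt(?, phash)";   // crypt() from PostgreSQL's pgcrypto
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, login);
                    ps.setString(2, login);
                    ps.setString(3, pass);
                    try (ResultSet rs = ps.executeQuery()) {
                        return rs.next();
                    }
                }
            }
        }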

    Read the article

  • Is there a search engine that indexes source code of a web-page?

    - by Dexter
    I need to search the web for sites in our industry that use the same AdWords management company, to ensure that the said company is not violating our contract, as they have been accused of doing. They use a tracking code in the template of every page which has a certain domain in the URL, and I'm wondering if it's possible to "Google" the source code using some bot that crawls the code rather than the content.

    For example, I bought an unlimited license for an image gallery, and I was asked to type the license number in a comment just before the script. I thought it was just so a human could look at the source and find out if someone paid, but it turned out that they actually had a crawler looking for their source code and that comment. If it ran across the code on your site, it would look for the comment, and if it found one, it would check to see if it was an existing license. If not, it would first notify you of your noncompliance, and then notify the owner of the script.

    Edit: I'm looking to index HTML and JavaScript only, not the server-side languages or Java.

    Read the article

  • Java, Massive message processing with queue manager (trading)

    - by Ronny
    Hello, I would like to design a simple application (without J2EE and JMS) that can process a massive amount of messages (like in trading systems).

    I have created a service that can receive messages and place them in a queue so that the system won't get stuck when overloaded. Then I created a service (QueueService) that wraps the queue and has a pop method that pops a message off the queue and, if there are no messages, returns null; this method is marked as "synchronized" for the next step. I have created a class that knows how to process the message (MessageHandler) and another class that can "listen" for messages in a new thread (MessageListener). The thread has a "while(true)" and tries to pop a message all the time. If a message was returned, the thread calls the MessageHandler class and, when it's done, it will ask for another message.

    Now, I have configured the application to open 10 MessageListeners to allow multi-message processing. So I now have 10 threads that are in a loop all the time. Is that a good design?? Can anyone point me to some books or sites on how to handle such a scenario??

    Thanks, Ronny
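    For comparison, here is a minimal sketch of the same shape built on java.util.concurrent; a BlockingQueue lets the listener threads block in take() instead of spinning on a synchronized pop that may return null. Apart from the MessageHandler role, the class and method names below are invented for illustration.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class MessageProcessor {
            // Bounded queue: producers block (or can be rejected) when the system is overloaded.
            private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);
            private final ExecutorService workers = Executors.newFixedThreadPool(10);

            // Called by the receiving service for every incoming message.
            public void enqueue(String message) throws InterruptedException {
                queue.put(message);   // blocks if the queue is full
            }

            public void start() {
                for (int i = 0; i < 10; i++) {
                    workers.submit(() -> {
                        try {
                            while (!Thread.currentThread().isInterrupted()) {
                                String message = queue.take();   // blocks until a message arrives
                                handle(message);                 // the MessageHandler role
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt(); // let the worker exit cleanly
                        }
                    });
                }
            }

            private void handle(String message) {
                System.out.println("processing " + message);
            }

            public void stop() {
                workers.shutdownNow();
            }
        }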

    Read the article

  • Object Design catalog and resources

    - by Tauren
    I'm looking for web sites, books, or other resources that provide a catalog of object designs used in common scenarios. I'm not looking for generic design patterns, but for samples of actual object designs that were used to solve real problems.

    For instance, I'm about to build an internal messaging system for a web application, similar to Facebook's messaging system. This system will allow administrators to send messages to all members, to selected groups of members, or to individuals. Members can send messages to other members or groups of members. Fairly common stuff and a feature that I'm sure thousands of web applications require. I know each situation is different and there are a million ways to design this solution. Although this scenario isn't really all that complex, I'm sure the basic design of the necessary objects and relationships for a system like this has already been done many times. It would be nice to review other similar designs before building my own.

    Is there a place where people can share their designs and others can browse/search through the catalog to review and provide feedback on them? StackOverflow could be used to a degree for this, but doesn't really provide a catalog of designs. Any other resources that would relate?

    Read the article

  • Android Signal 11 (SIGSEGV)

    - by Naturjoghurt
    I read many posts here and on other sites, but cannot find the cause of my error.

    I use an AsyncTask because I want to easily manipulate the UI thread before and after execution. In doInBackground I create a ThreadPoolExecutor and execute Runnables. If I only execute one Runnable with the executor, there is no problem, but if I execute another Runnable I get the following error:

        06-26 18:00:42.288: A/libc(25073): Fatal signal 11 (SIGSEGV) at 0x7f486162 (code=1), thread 25106 (pool-1-thread-2)
        06-26 18:00:42.304: D/dalvikvm(25073): GC_CONCURRENT freed 119K, 2% free 8908K/9056K, paused 4ms+4ms, total 45ms
        06-26 18:00:42.327: I/System.out(25073): In Check All with Prefix: a and Length: 4
        06-26 18:00:42.390: I/DEBUG(126): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        06-26 18:00:42.390: I/DEBUG(126): Build fingerprint: 'google/yakju/maguro:4.2.2/JDQ39/573038:user/release-keys'
        06-26 18:00:42.390: I/DEBUG(126): Revision: '9'
        06-26 18:00:42.390: I/DEBUG(126): pid: 25073, tid: 25106, name: pool-1-thread-2  >>> de.uni_duesseldorf.cn.distributed_computing2 <<<
        06-26 18:00:42.390: I/DEBUG(126): signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 7f486162
        ...
        06-26 18:00:42.538: I/DEBUG(126): memory map around fault addr 7f486162:
        06-26 18:00:42.538: I/DEBUG(126): 60292000-60391000
        06-26 18:00:42.538: I/DEBUG(126): (no map for address)
        06-26 18:00:42.538: I/DEBUG(126): bed14000-bed35000 [stack]

    I set up the ThreadPoolExecutor like this:

        // numberOfPackages: number of Runnables to be executed
        public void initializeThreadPoolExecutor(int numberOfPackages) {
            int corePoolSize = Runtime.getRuntime().availableProcessors();
            int maxPoolSize = numberOfPackages;
            long keepAliveTime = 60;
            final BlockingQueue workingQueue = new LinkedBlockingQueue();
            executor = new ThreadPoolExecutor(corePoolSize, maxPoolSize, keepAliveTime,
                    TimeUnit.SECONDS, workingQueue);
        }

    I have no clue why it fails when starting the second thread. Maybe memory leaks? Any help appreciated. Thanks in advance.
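    For what it's worth, a SIGSEGV is a native crash, so the real culprit is usually in native code reached by whatever the Runnables do rather than in the executor setup itself. Still, two things are easy to tighten while ruling things out: give the work queue a Runnable type parameter, and make doInBackground wait for the pool to drain before it returns, so no pool thread outlives the AsyncTask. This is a sketch only, reusing the field and method names from the question and assuming the surrounding class and imports; it is not guaranteed to address the crash.

        public void initializeThreadPoolExecutor(int numberOfPackages) {
            int corePoolSize = Runtime.getRuntime().availableProcessors();
            int maxPoolSize = Math.max(corePoolSize, numberOfPackages);
            long keepAliveTime = 60;
            BlockingQueue<Runnable> workingQueue = new LinkedBlockingQueue<Runnable>();
            executor = new ThreadPoolExecutor(corePoolSize, maxPoolSize, keepAliveTime,
                    TimeUnit.SECONDS, workingQueue);
        }

        // Call this at the end of doInBackground, after all Runnables have been submitted.
        private void awaitCompletion() throws InterruptedException {
            executor.shutdown();                               // accept no new tasks
            executor.awaitTermination(10, TimeUnit.MINUTES);   // block until the pool drains
        }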

    Read the article

  • Publishing content to multiple (unknown) domains using Open Graph?

    - by Beau Lebens
    I'm working on an application that publishes content ('articles') on a variety of URLs, which are all controlled by the same WordPress installation (mapped domains, all powered by the same set of code/part of a network). All of the publishing is done through one central Facebook App. I have no idea what the domains for these URLs are going to be, since they are controlled by our users, who register domains and then configure them within their account on our service.

    When I attempt to use Open Graph to publish content on one of these sites (that has a customized domain), they are rejected with the following error (error code 1611028):

        Object at URL * * * * of type 'article' is invalid because the domain '* * * ' is not allowed for the specified application id ' * * *'. You can verify your configured 'App Domain' at....

    Since I can't enter all of the domains into Facebook, and since they are not derived from my App URL anyway, is there any way that I can make this work? Some sort of magic OG tag I can put in the pages or something? Or is it just not possible to do what I'm trying to do?

    Read the article

  • Good ways to earn income as a self employed developer

    - by nullptr
    I was just wondering if people could share their experiences and ideas about generating/earning income from a software product or service they have personally developed. To me this seems like a good way to earn a living while doing what we love (programming) and working on projects and problems which interest us - i.e., NOT boring bank or marketing software etc. 9-5 all week...

    Some ideas I have are:

    Web 2.0 style sites (Facebook, YouTube, Twitter, Digg, etc.). These can be very, very profitable as we all know, but can take years to take off. Are there ways to survive until/if this does happen?

    Mobile applications: iPhone, Google Android and the upcoming Nintendo DS app store. These have good potential to make it easy to find a market for your application and make selling it easy.

    Shareware/PC software. A bit '80s and '90s, and you kind of need to be a salesman/marketer to sell it, but it's the only other thing I can think of.

    Also, I'm not talking about doing freelance work. I'm only interested in ideas you can come up with and develop yourself (not other people's ideas or problems which you are paid to develop). Things that a sole developer, or at most two developers, could work on and that have good potential for high returns on investment (in terms of time) would be great.

    PS, I wish I thought of stackoverflow!

    Read the article

  • Web-Application development startup advice

    - by rpr
    Dear programmers of StackOverflow, recently two of my friends from college and I started a software company. We are developing web applications using GWT-Spring-Hibernate and other helper frameworks. In a couple of months we managed to set up a stable back-end and produced some demo programs for CRM solutions. Our area of interest is CRM, where we can combine the flexibility of our back-end with slick-looking GWT based GUIs.

    Unfortunately we live in a third-world country (well, kind of two and a half :) and no one here gives our work enough credit, or really cares about the advantages of web apps. We are stuck at the moment because our current clients do not want to pay that much money for just "putting their local app on the web". Since we can not find satisfying work here, we have decided to work online/internationally. I was wondering if you guys know of good freelance-type sites where we can throw ourselves into the market.

    Also, a question from frustration, to those who are in the field or knowledgeable about/interested in web-based CRM: how much would you ask/pay for, let's say, a web app which will keep track of all the patients of a multi-branch clinic, while also allowing the patients to view their own records? Including tiered authorization, logging, etc.

    Many thanks in advance for your responses!

    Read the article

  • Curl and file_get_contents time out when loading a page

    - by Joseph
    I'm trying to grab the content of this page (http://www.alluc.org/movies/watch-hot-tub-time-machine-2010-online/186214.html) using curl or file_get_contents, but it doesn't work; the page loads when I just open it in the browser, but not otherwise. Here are my settings for curl:

        curl_setopt($ch1, CURLOPT_INTERFACE, "$use_proxy");
        curl_setopt($ch1, CURLOPT_URL, $url);
        curl_setopt($ch1, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch1, CURLOPT_REFERER, 'http://'.$domain);
        curl_setopt($ch1, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1");
        curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, TRUE);
        echo curl_setopt($ch1, CURLOPT_HEADER, 1);
        curl_setopt($ch1, CURLOPT_VERBOSE, true);

    It works fine for other sites, just not this one for some reason. Any clue as to how to make it work? Thanks.

    Here's the info from curl_getinfo($ch1):

        [url] => http://www.alluc.org/movies/watch-hot-tub-time-machine-2010-online/186214.html
        [content_type] =>
        [http_code] => 0
        [header_size] => 0
        [request_size] => 0
        [filetime] => -1
        [ssl_verify_result] => 0
        [redirect_count] => 0
        [total_time] => 0
        [namelookup_time] => 0.002578
        [connect_time] => 0
        [pretransfer_time] => 0
        [size_upload] => 0
        [size_download] => 0
        [speed_download] => 0
        [speed_upload] => 0
        [download_content_length] => -1
        [upload_content_length] => -1
        [starttransfer_time] => 0
        [redirect_time] => 0

    Read the article

  • jquery XML .html() instead of .text() is not displaying?

    - by Xtian
    I can't seem to figure out this problem. I am trying to get XML to render HTML tags. The problem I am having is that using .text() will display, but it does not recognize any HTML tags. If I use .html(), or just call var long2 = $(this).find('long'), nothing will show up in Safari or IE. My XML has paragraphs with text in them that needs bold or other tags, which is why I need HTML tags in the XML to be recognized. Code:

        $(document).ready(function(){
            $.ajax({
                type: "GET",
                url: "xml/sites.xml",
                dataType: "xml",
                success: function(xml) {
                    $(xml).find('site').each(function(){
                        var id = $(this).attr('id');
                        var title = $(this).find('title').text();
                        var Class = $(this).find('class').text();
                        $('<div class="'+Class+'" id="link_'+id+'"></div>')
                            .html('<p class="title">'+title+'</p>')
                            .appendTo('#page-wrap');
                        $(this).find('desc').each(function(){
                            var url = $(this).find('url').text();
                            var long = $(this).find('long').text();
                            $('<div class="long"></div>').html(long).appendTo('#link_'+id);
                            $('#link_'+id).append('<a href="http://'+url+'">'+url+'</a>');
                            var long2 = $(this).find('long');
                            $('<div class="long2"></div>').html(long2).appendTo('#link_'+id);
                        });
                    });
                }
            });
        });

    Read the article

  • Get Highest Res Favicon

    - by Jeremy
    I'm making a website that needs to dynamically obtain the favicon of sites upon request. I've found a few APIs that can accomplish this fairly well, and so far I'm liking http://www.fvicon.com/. The final image for my website will be 64x64px, and some websites such as Google and Wordpress have nice images of this size that are easily retrieved via this API. Though, of course, most websites only have a 16x16 favicon image, and scaling that image to 64x64 loses a lot of quality.

    Examples:
    (high res) http://a.fvicon.com/wordpress.com?format=png&width=64&height=64
    (low res) http://a.fvicon.com/yahoo.com?format=png&width=64&height=64

    Keeping this in mind, I'm planning on somehow determining whether a high-res image is available and, if so, the website will use this image. If not, I want to use a pre-made 64x64 icon with the smaller icon layered over it. What I'm having trouble with is determining whether there is a high-res favicon available or not. Also, I'm curious if there's a better approach to this situation. I'd rather not use smaller images (64x64 works out really well for this project). The lowest res I'm willing to drop to is 48x48, but even then there will be a significant quality loss for scaling up 16x16 favicons.

    Any ideas? If you need any more information I will gladly provide it. Thank you!

    Read the article

  • How can I get HTML to link to a browser (or system) specified URL?

    - by MrHatken
    Hi All, I'd like to be able to create an "HTML link" that the user can click on and be taken to a URL (location) specified either in the browser (preferences?) or in a system environment variable. Is this possible? Any suggestions on how to do it, please?

    For example, it may look something like this (or alternatively it could be a clickable image or even a submit button): "Click here to go to your preferred news site." When the user clicks on "here", the browser would go to a location specified not in the HTML but somehow in the browser (preferences?) or some system environment variable (OS specific, etc.). Of course, the user would have to set up this preference or environment variable (or have some local application, or better, a Web page that could set it - when approved by the user).

    This is sort of like how most OSes these days let you set a "preferred app" for image processing or playing media. I would like to set preferred Web sites for certain tasks.

    Thanks for any suggestions. Hopefully with JavaScript, modern browsers, and perhaps HTML 5, something like this is possible.

    Cheers, Ashley.

    Read the article

  • user interface pattern for associating single or many objects to an entity

    - by Samuel
    Need suggestions on implementing the association of a single object, or many objects, with an entity.

    All soccer team players are registered individually (e.g. they are part of a 'players' table), and a soccer team has many players. The click sequence is like this:

    a] The soccer team owner provides a name and brief description of the soccer team.
    b] Now he wants to add players to this team.
    c] There is an 'Add players to team' button which takes you to the 'View Players' page and lets you multi-select players from there.

    Assuming this is a paginated list of players, how do you handle the following? Do you provide a check box against each player and let the manager do a multi-selection? If more players need to be added later, it doesn't make sense to show the players who have already been added to the team - do you mark those entries as not selectable, or would you still show them? If you need to filter, do you provide search filters at the top of this page?

    I'm looking for ideas on how to implement this, or sites which have already done something similar.
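    On the data side, one common answer to the "don't show players who are already on the team" question is to exclude them in the query behind the paginated picker. A rough sketch in Java/JDBC, with entirely hypothetical table and column names:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.ArrayList;
        import java.util.List;

        public class PlayerPicker {
            // Players matching an optional name filter who are NOT yet on the given team,
            // returned one page at a time. The schema here is invented for illustration.
            static List<String> selectablePlayers(Connection conn, long teamId, String nameFilter,
                                                  int page, int pageSize) throws Exception {
                String sql =
                    "SELECT p.id, p.name FROM players p" +
                    " WHERE p.name LIKE ?" +
                    "   AND p.id NOT IN (SELECT tp.player_id FROM team_players tp WHERE tp.team_id = ?)" +
                    " ORDER BY p.name" +
                    " LIMIT ? OFFSET ?";
                List<String> names = new ArrayList<>();
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setString(1, "%" + nameFilter + "%");
                    ps.setLong(2, teamId);
                    ps.setInt(3, pageSize);
                    ps.setInt(4, page * pageSize);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            names.add(rs.getString("name"));
                        }
                    }
                }
                return names;
            }
        }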

    Read the article

  • Threadpool design question

    - by ZeroVector
    I have a design question. I want some feedback on whether a ThreadPool is appropriate for the client program I am writing.

    I have a client running as a service, processing database records. Each of these records contains connection information for external FTP sites [basically it is a queue of files to transfer]. A lot of them are for the same host, just moving different files, so I am grouping them together by host. I want to be able to create a new thread per host. I really don't care when the transfers finish; they just need to do all the work (or try to do it) they were assigned and then terminate once they are finished, cleaning up all the resources they used in the process. I anticipate no more than 10-25 connections to be established. Once the transfer queue is empty, the program will simply wait until there are records in the queue again.

    Is a ThreadPool a good candidate for this, or should I use a different approach?

    Edit: For the most part, this is the only significant custom application running on the server.
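    A pool of worker threads does fit this shape well, since the per-host tasks are independent and short-lived. As a point of comparison, here is a minimal sketch of the grouping-by-host idea, written in Java with java.util.concurrent; the record type and the transfer step are invented placeholders:

        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.stream.Collectors;

        public class TransferDispatcher {

            // One queued database record: where to connect and which file to move. Hypothetical shape.
            record TransferRecord(String host, String user, String password, String remotePath) {}

            // Upper bound on hosts being worked concurrently (10-25 in the question).
            private final ExecutorService pool = Executors.newFixedThreadPool(25);

            public void dispatch(List<TransferRecord> queue) {
                // One task per host; each task works through that host's files and then exits.
                Map<String, List<TransferRecord>> byHost =
                        queue.stream().collect(Collectors.groupingBy(TransferRecord::host));

                byHost.forEach((host, records) -> pool.submit(() -> {
                    for (TransferRecord r : records) {
                        transfer(r);   // a real version would reuse one FTP connection per host
                    }
                }));
            }

            private void transfer(TransferRecord r) {
                System.out.println("moving " + r.remotePath() + " from " + r.host());
            }
        }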

    Read the article

  • How to manage feeds with subclassed object in Django 1.2?

    - by Matteo
    Hi, I'm trying to generate an RSS feed from a model like this one, selecting all the Entry objects:

        from django.db import models
        from django.contrib.sites.models import Site
        from django.contrib.auth.models import User
        from imagekit.models import ImageModel
        import datetime

        class Entry(ImageModel):
            date_pub = models.DateTimeField(default=datetime.datetime.now)
            author = models.ForeignKey(User)
            via = models.URLField(blank=True)
            comments_allowed = models.BooleanField(default=True)
            icon = models.ImageField(upload_to='icon/', blank=True)

            class IKOptions:
                spec_module = 'journal.icon_specs'
                cache_dir = 'icon/resized'
                image_field = 'icon'

        class Post(Entry):
            title = models.CharField(max_length=200)
            description = models.TextField()
            slug = models.SlugField(unique=True)

            def __unicode__(self):
                return self.title

        class Photo(Entry):
            alt = models.CharField(max_length=200)
            description = models.TextField(blank=True)
            original = models.ImageField(upload_to='photo/')

            class IKOptions:
                spec_module = 'journal.photo_specs'
                cache_dir = 'photo/resized'
                image_field = 'original'

            def __unicode__(self):
                return self.alt

        class Quote(Entry):
            blockquote = models.TextField()
            cite = models.TextField(blank=True)

            def __unicode__(self):
                return self.blockquote

    When I use render_to_response in my views I simply call:

        def get_journal_entries(request):
            entries = Entry.objects.all().order_by('-date_pub')
            return render_to_response('journal/entries.html', {'entries': entries})

    And then I use a conditional template to render the right snippets of HTML:

        {% extends "base.html" %}
        {% block main %}
        <hr>
        {% for entry in entries %}
        {% if entry.post %}[...]{% endif %}[...]

    But I cannot do the same with the feed framework in Django 1.2... Any suggestion, please?

    Read the article

  • Online job-searching is tedious. Help me automate it.

    - by ehsanul
    Many job sites have broken searches that don't let you narrow down jobs by experience level. Even when they do, it's usually wrong. This requires you to wade through hundreds of postings that you can't apply for before finding a relevant one, which is quite tedious. Since I'd rather focus on writing cover letters etc., I want to write a program to look through a large number of postings and save the URLs of just those jobs that don't require years of experience.

    I don't require help writing the scraper to get the HTML bodies of possibly relevant job posts. The issue is accurately detecting the level of experience required for the job. This should not be too difficult, as job posts are usually very explicit about this ("must have 5 years experience in..."), but there may be some issues with overly simple solutions.

    In my case, I'm looking for entry-level positions. Often they don't say "entry-level", but inclusion of the words probably means the job should be saved. Next, I can safely exclude a job that says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable for excluding jobs. But then, I realized some jobs say they'll take 0-2 years of experience, which matches the exclusion regex but is clearly a job I want to take a look at. Hmmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". I can handle that too, but it makes me wonder what other patterns I'm not thinking of, and how many jobs I might be excluding.

    That's what brings me here: to find a better way to do this than regexes, if there is one. I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a bayesian filter, maybe?
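    For what it's worth, here is a rough Java sketch of the regex heuristic the question converges on. The patterns are only the ones discussed above, so phrasings they don't cover ("a decade of experience", spelled-out numbers, and so on) will slip through; treat it as a baseline to compare a Bayesian filter against rather than a finished classifier.

        import java.util.regex.Pattern;

        public class ExperienceFilter {

            // Phrases that strongly suggest the posting is junior-friendly.
            private static final Pattern ENTRY_LEVEL = Pattern.compile(
                    "entry[- ]level|0\\s*-\\s*2\\s+years|(?:less|fewer)\\s+than\\s+\\d+\\s+years",
                    Pattern.CASE_INSENSITIVE);

            // Phrases that suggest several years of required experience.
            private static final Pattern TOO_SENIOR = Pattern.compile(
                    "(?:[3-9]|1\\d)\\+?\\s+years", Pattern.CASE_INSENSITIVE);

            // Keep the posting unless it looks senior and nothing junior-friendly overrides that.
            static boolean worthSaving(String postingText) {
                if (ENTRY_LEVEL.matcher(postingText).find()) {
                    return true;
                }
                return !TOO_SENIOR.matcher(postingText).find();
            }

            public static void main(String[] args) {
                System.out.println(worthSaving("Must have 5 years experience in Java"));      // false
                System.out.println(worthSaving("0-2 years of experience, entry level role")); // true
                System.out.println(worthSaving("We value curiosity over credentials"));       // true
            }
        }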

    Read the article

  • Advanced XPath query

    - by alex
    Hello, I have an XML file that looks like this:

        <?xml version="1.0" encoding="utf-8" ?>
        <PrivateSchool>
          <Teacher id="teacher1">
            <Name> teacher1Name </Name>
          </Teacher>
          <Teacher id="teacher2">
            <Name> teacher2Name </Name>
          </Teacher>
          <Student id="student1">
            <Name> student1Name </Name>
          </Student>
          <Student id="student2">
            <Name> student2Name </Name>
          </Student>
          <Lesson student="student1" teacher="teacher1" />
          <Lesson student="student2" teacher="teacher2" />
          <Lesson student="student3" teacher="teacher3" />
          <Lesson student="student1" teacher="teacher2" />
          <Lesson student="student3" teacher="teacher3" />
          <Lesson student="student1" teacher="teacher1" />
          <Lesson student="student2" teacher="teacher4" />
          <Lesson student="student1" teacher="teacher1" />
        </PrivateSchool>

    There's also a DTD associated with this XML, but I assume it's not very relevant to my question. Let's assume all needed teachers and students are well defined.

    What is the XPath query that returns the names of the teachers who have at least one student that took more than 10 lessons with them?

    I was looking at many XPath sites/examples. Nothing seemed advanced enough for this kind of question... Thank you
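    XPath 1.0 (which is what javax.xml.xpath gives you out of the box in Java, for example) has no variables, which makes the teacher/student correlation hard to express in a single query. One workable pattern is to let the host language drive the correlation and use XPath only for the counting. A rough sketch of that idea in Java, assuming the document above is saved as school.xml:

        import java.util.HashSet;
        import java.util.Set;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathConstants;
        import javax.xml.xpath.XPathFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class BusyTeachers {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().parse("school.xml");
                XPath xp = XPathFactory.newInstance().newXPath();

                NodeList teachers = (NodeList) xp.evaluate(
                        "/PrivateSchool/Teacher", doc, XPathConstants.NODESET);
                NodeList lessonStudents = (NodeList) xp.evaluate(
                        "/PrivateSchool/Lesson/@student", doc, XPathConstants.NODESET);

                // Distinct student ids that appear in any Lesson.
                Set<String> studentIds = new HashSet<>();
                for (int i = 0; i < lessonStudents.getLength(); i++) {
                    studentIds.add(lessonStudents.item(i).getNodeValue());
                }

                for (int i = 0; i < teachers.getLength(); i++) {
                    Element teacher = (Element) teachers.item(i);
                    String tid = teacher.getAttribute("id");
                    for (String sid : studentIds) {
                        double lessons = (Double) xp.evaluate(
                                "count(/PrivateSchool/Lesson[@teacher='" + tid + "' and @student='" + sid + "'])",
                                doc, XPathConstants.NUMBER);
                        if (lessons > 10) {
                            // Name evaluated relative to the Teacher element as context node.
                            System.out.println(xp.evaluate("normalize-space(Name)", teacher));
                            break;   // one qualifying student is enough for this teacher
                        }
                    }
                }
            }
        }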

    Read the article

  • Should I learn to code?

    - by saltcod
    Hi All, This is more of a philosophical question than a technical one, but I'd like some opinions on it, and I think that there are many others in my position that would benefit.

    My issue is that I don't really have time to learn how to code. I know, I know... no one has time anymore, but please hear me out. Since learning to use Drupal about 2 years ago I've been involved with several projects wherein I've become the default quasi-developer, front-end designer, site manager, and system administrator. What I've found is that I can produce fairly nice, feature-rich Drupal sites with the wealth of contrib. modules out there (Views, CCK, image handling, etc.). BUT! I can't code. I know enough PHP to insert something into a block, or re-word a string, but that's about it. I still don't really even know how arrays work.

    My question

    Succinctly, my question is: Given the time that I have available for all of this stuff - in addition to a full-time job and regular life - am I better off trying to become more expert at the front-end stuff, or should I just learn PHP already?

    Pros
    1. If a project doesn't use Drupal, I'll know enough PHP to be able to participate.
    2. Learning PHP would help my Drupal development too.
    3. Learning PHP would make front-end theming easier.
    4. Learning PHP should give me that missing background in programming - and should allow me to learn other languages in the future.

    Cons
    1. At 28, I know I'm not too old to learn anything. But am I too old to become 'good'?
    2. Am I better off getting better and better at front-end UX work?
    3. Am I better off farming out the PHP work?

    Suggestions from coders welcome! Thanks, Terry

    Read the article

  • IE Information Bar, download file...how do I code for this?

    - by flatline
    I have a web page (ASP.NET) that compiles a package and then redirects the user to the download file via JavaScript (window.location = ...). This is accompanied by a hard link on the page in case the redirect doesn't work - emulating the download process on many popular sites. When the IE information bar appears at the top due to restricted security settings, and a user clicks on it to download the file, it redirects the user to the page, not the download file, which refreshes the page and removes the hard link.

    What is the information bar doing here? Shouldn't it send the user to the location of the redirect? Am I setting something wrong in the headers of the download response, or doing something else wrong to send the file in the first place?

    C# code:

        m_context.Response.Buffer = false;
        m_context.Response.ContentType = "application/zip";
        m_context.Response.AddHeader("Content-Length", fs.Length.ToString());
        m_context.Response.AddHeader("Content-Disposition",
            string.Format("attachment; filename={0}_{1}.zip", downloadPrefix,
                DateTime.Now.ToString("yyyy-MM-dd_HH-mm")));
        //send the file

    Read the article

  • Read a text file

    - by Cyprus106
    I have looked everywhere and surprisingly can't find a good solution to this! I've got the following code that is supposed to read a text file and display its contents. But it's not reading, for some reason. Am I doing something wrong? FTR, I can't use PHP for this. It's gotta be JavaScript.

        var txtFile = new XMLHttpRequest();
        txtFile.open("GET", "http://www.mysite.com/todaysTrivia.txt", true);
        txtFile.send(null);
        txtFile.onreadystatechange = function() {
            if (txtFile.readyState == 4) { // Makes sure the document is ready to parse.
                alert(txtFile.responseText + " - " + txtFile.status);
                //if (txtFile.status === 200) { // Makes sure it's found the file.
                    var doc = document.getElementById("Trivia-Widget");
                    if (doc) {
                        doc.innerHTML = txtFile.responseText;
                    }
                //}
            }
            txtFile.send(null);
        }

    Any good ideas what I'm doing wrong? It just keeps giving me a zero status.

    EDIT: I guess it would be a good idea to explain why I need this code. It's basically a widget that other folks can put on their own websites that grabs a line of text from my website and displays it on theirs. The problem is that it really can't be server-side, since I've got zero control over everyone else's sites that use this.

    Read the article

  • Pass variable to Info Window in FusionTableLayer

    - by user1030205
    I am building a web application that includes a Google Map layered with data from a Google Fusion Table. I have defined the info window for the markers in the Fusion Table and all is rendering as expected, but I have one issue: I need to pass a session variable from my web application to be included in the links that are defined in the info window, but can't seem to find a way to do this. Below is the JavaScript I am currently using to render the map:

        var myOptions = {
            zoom: 10,
            mapTypeId: google.maps.MapTypeId.ROADMAP,
            center: new google.maps.LatLng(40.4230, -98.7372)
        }
        map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);

        // Weather
        weatherLayer = new google.maps.weather.WeatherLayer({
            temperatureUnits: google.maps.weather.TemperatureUnit.FAHRENHEIT
        });
        weatherLayer.setMap(map);

        // Hobby Stores
        var storeLayer = new google.maps.FusionTablesLayer({
            query: { select: "col2", from: "3991553" },
            map: map,
            supressInfoWindows: true
        });

        // Club Sites
        var siteLayer = new google.maps.FusionTablesLayer({
            query: { select: "col13", from: "3855088" },
            styles: [{ markerOptions: { iconName: "airports" } }],
            map: map,
            supressInfoWindows: true
        });

    I'd like to be able to pass some type of parameter in the call to google.maps.FusionTablesLayer that passes a value to be included in the info window, but can't find a way to do this. To view the actual page, visit www.dualrates.com. Enter your zipcode and select one of the airport markers to see the info window. You may have to zoom the map out to see an airfield.

    Read the article

  • How to make CSS URL background images show up in localhost?

    - by Joel
    Hi guys, I just installed XAMPP and I brought one of my live sites into it to be able to start working from localhost. So to view my site, I navigate to localhost/example.com.

    I noticed some issues with images. In my HTML I had, for example:

        <img src="/new_pictures/05.jpg" alt="Central Market"/>

    The image wouldn't show up, but then I removed the / and this works:

        <img src="new_pictures/05.jpg" alt="Central Market"/>

    I seem to have a similar problem with the CSS background image, but I can't get it to work - if I remove the / there, the image doesn't show up on the live site. How do I make the background image show up on localhost?

    Example CSS (works on the live site but not on localhost):

        .outeremailcontainer {
            height: 60px;
            width: 275px;
            background-image: url(/images/feather_email2.jpg);
            text-align: center;
            float: right;
            position: relative;
            z-index: 1;
        }

    Thanks!

    Read the article
