Search Results

Search found 18800 results on 752 pages for 'sqlauthority website revi'.


  • Do I need social networks to be an expert developer? [closed]

    - by Gerald Blizzy
    This question may sound odd, but do I need Twitter, Facebook and Google+ if I am a web developer? I see many expert developers using them for work nowadays. It seems like it's harder to stay in touch with customers, co-workers and potential customers if you don't use social networks. Am I right? The reason I ask is that I am totally not a Facebook/Twitter person; I find them boring and annoying. I understand that LinkedIn is useful for a career, but what about Twitter and Facebook? Are they needed for a web development career? What I am trying to ask is: if I only use LinkedIn, my own portfolio website, Google Talk, Gmail and something like GitHub, would I actually miss anything professionally/job-wise? My thinking is that I can just have my portfolio website where I list all my projects, as well as a contact page with my Google Talk/Gmail account. That can suit a full-time job, freelance work and my own projects, so email and Google Talk should be enough. Am I right or not? Thanks in advance!

    Read the article

  • Does Google sometimes prevent new white hat sites from ranking at all in some verticals?

    - by JVerstry
    Assume someone wants to launch a new viagra or acai berry e-commerce website. There is a lot of competition, and this site does not really bring anything new other than another online counter to buy products at a nice price. Assume this site does not use any black hat techniques and stays within Google's quality guidelines, and assume it has no (or few) backlinks from non-authoritative websites. Assume this website's pages are indexed properly in Webmaster Tools, no penalties are reported, and no site improvements are suggested. Google crawls the site daily as reported in GWT, and there are no robots.txt configuration issues. Does Google sometimes decide not to rank this site for any user query (for weeks) because it lacks original content? The reason I am asking is that I am trying to understand the possible cause of a similar situation I am observing with two sites. If so, what is the way to start ranking for these sites? If not, does that mean the cause is elsewhere for sure? Any confirmed info to get out of the maze is welcome.

    Read the article

  • .htaccess https redirect best method

    - by Douglas Cottrell
    I have searched through all the redirects posted by others and can't quite find the answer to my problem. I have a website with over 3000 pages and we are getting duplication issues within Google. We want everything in the parent directory to be HTTP except our contact.php and login.php pages. We then have three folders that must be secured: admin, clients, customers. I have tried using the following code in separate .htaccess files for each folder, but I keep getting a conflict, and I am still trying to find a good solution for the home directory.

        RewriteEngine On
        RewriteCond %{SERVER_PORT} 80
        RewriteCond %{REQUEST_URI} admin
        RewriteRule ^(.*)$ https://www.website.com/$1 [R,L]

    Any help would be greatly appreciated.
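
    A minimal sketch of one way to handle both directions from a single .htaccess in the document root, assuming mod_rewrite is enabled and that the server sets %{HTTPS} (www.website.com stands in for the real domain, as in the rules above). Doing it once at the root avoids the per-folder conflicts:

        RewriteEngine On
        # Force HTTPS for the three secured folders and the two sensitive pages
        RewriteCond %{HTTPS} off
        RewriteRule ^((admin|clients|customers)(/.*)?|contact\.php|login\.php)$ https://www.website.com/$1 [R=301,L]
        # Send everything else back to HTTP to avoid duplicate-content issues
        RewriteCond %{HTTPS} on
        RewriteCond %{REQUEST_URI} !^/(admin|clients|customers)(/|$)
        RewriteCond %{REQUEST_URI} !^/(contact|login)\.php$
        RewriteRule ^(.*)$ http://www.website.com/$1 [R=301,L]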

    Read the article

  • Devoxx 2011 Started Today

    - by Yolande
    Devoxx 2011, organized by the Java User Group in Belgium, is the biggest Java conference in Europe. The first two University Days set the tone for the weeklong conference with in-depth technical sessions led by luminaries from the Java community and industry experts. Each day is a great mix of 3-hour sessions and hands-on labs, 30-minute Tools-in-Action sessions giving tips for faster and better application development, and the traditional Birds-of-a-Feather sessions in the evening. Java sessions for today and tomorrow:
    - Next Gen Enterprise Apps - Bert Ertman and Paul Bakker talked about new Java EE 6 APIs that reduce the need for boilerplate code and configuration.
    - JavaFX 2.0 – A Java Developer's Guide - Stephen Chin and Peter Pilgrim will give an overview of the new version and how Java developers can take advantage of it.
    - Java Rich Clients with JavaFX 2.0 - Richard Bair and Jasper Potts will get into the JavaFX 2.0 APIs.
    - Building an End-to-End Application Using Java EE 6 and NetBeans - Arun Gupta will showcase how to write Java EE 6 applications more effectively.
    - The OpenJDK Community BOF with Dalibor Topic.
    Starting Tuesday, come by the Oracle booth to chat about technology, enter our raffle, and have a beer every day at 18:45. The sessions will be available on the Parleys website after the conference. In the meantime, you can learn a lot about these Java technologies on our website:
    - JavaFX 2.0 tutorials and documentation
    - OpenJDK
    - News from the GlassFish community
    - Java EE 6 resources
    - JavaOne sessions

    Read the article

  • Creating a backup - Rsync - Connection refused (111)

    - by pablofiumara
    I am trying to create a backup of my website for free. I just want a backup of my website, including not only all files and the configuration but also the databases; I mean a full backup. If it can be done automatically, even better. I feel there are better ways than using cPanel to achieve this (actually, I believe some web hosts do not provide cPanel at all). I read the following on how to do it:

        Automatically mirror the entire contents and configuration of your main server to a secondary backup server on a completely separate network in a different data centre. Use RSync, FXP, cPanel voodoo, or whatever method you wish to automate syncing.

    That is why I installed the rsync daemon, which is an alternative to SSH for remote backups. I configured it, but the test went wrong. The terminal shows me this:

        pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
        [sudo] password for pablofiumara:
        rsync: failed to connect to pablofiumara.com (50.87.147.75): Connection refused (111)
        rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]
        pablofiumara@pablofiumara-Lenovo-G470:~$ sudo rsync [email protected]::share
        failed to connect to 50.87.147.7 (50.87.147.7): Connection refused (111)
        rsync error: error in socket IO (code 10) at clientserver.c(122) [Receiver=3.0.9]

    What should I do? Is there a better or easier way to achieve what I want (see the first paragraph)?
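
    "Connection refused" on the daemon port (873) usually means nothing is listening there or a firewall blocks it. A minimal sketch of the usual workaround, assuming the host allows ordinary SSH logins (the username and paths here are hypothetical): a single colon makes rsync tunnel over SSH, so no daemon is needed at all.

        # Single colon = rsync over SSH (port 22); double colon = rsync daemon
        rsync -avz --delete username@pablofiumara.com:/home/username/public_html/ ~/site-backup/
        # Databases are not files on disk in a consistent state; dump them separately,
        # e.g. with mysqldump on the server, and include the dump in the sync

    Put the command in a cron job to get the automatic part.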

    Read the article

  • Blank pale blue screen with Live USB Kubuntu on AMD Sempron 2800+ processor

    - by WGCman
    I am trying to install Kubuntu onto a USB stick to use on my Acer Aspire 1362 laptop with an AMD Sempron 2800+ chip. Using Windows XP, I downloaded and saved to the laptop's hard drive: kubuntu-12.04.1-desktop-i386.iso from the GetKubuntu website and LinuxLive USB Creator 2.8.16.exe from the LinuxLive website. I then installed the latter and ran it, installing Kubuntu onto the memory stick. Leaving the BIOS setup unchanged, the USB stick is ignored and Windows boots. If I change the BIOS boot order so that the memory stick takes precedence, I see a dark blue screen announcing Kubuntu 12.04, and on selecting either "Live mode" or "Persistent mode", messages flash by quickly, some of which appear to be error messages, including "trying to unpack rootfs image as initramfs", "cannot allocate resource for mainboard", "no plug and play device found". Eventually I see a pale blue screen with four moving dots announcing Kubuntu 12.04, similar to the login screen of my Kubuntu desktop, but with no invitation to log in or indeed any dialog. After several minutes, this changes to a black screen with more messages including "no caching mode present" and "ADDRCONF(NETDEV_UP): wlan0: link is not ready", then degrades to a blank pale blue screen which can only be cleared by switching the computer off. Finding no way to log the error messages passing by, I managed to photograph most of them, but know no way to attach the photo to this forum. As suggested by User 68186 (to whom thanks!), I have edited my original post to reflect the recent progress, so the following two comments are now superseded.

    Read the article

  • SQL SERVER – Caption the Cartoon Contest – Last 2 Days

    - by pinaldave
    A developer's life is very interesting: we often want to start our day early at the job so we can go home early. However, that day never comes, as the life of a developer is always about working late hours. If a developer gets to the office early, there is a good chance his co-workers will come late. Additionally, I am confident there will always be something urgent for developers or DBAs to solve right at the time they are ready to go home. This is the life of developers! Here is the interesting story of a DBA who was about to go home. He had to take his girlfriend to a movie and dinner in 30 minutes. However, his manager asked him to fix the performance-related issues with their production server. In a normal case, he would have had only two choices: a) the job or b) the girlfriend. Well, our superhero DBA decided to use efficient tools and improved the performance of the production server in merely 30 minutes. When he was done, his manager was absolutely surprised by the efficiency and accuracy of his work, and asked him a question (shown in the cartoon). Here is the contest: you need to guess the answer our superhero DBA gave. If you guess correctly, you may win a Star Wars R2-D2 Inflatable Remote Controlled device. Additionally, if you download DB Optimizer before Dec 8, 2012, you will be eligible for a USD 25 Amazon Gift Card (there are 10 such awards in total). Please do not leave comments in this thread; to participate in the contest, please leave a comment on the original contest page. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Using RAND() in User Defined Functions (UDF)

    - by pinaldave
    Here is a question I received in email: "Pinal, I am writing a function that needs to generate a random password. While writing the T-SQL I faced the following issue: every time I tried to use the RAND() function in my User Defined Function I got this error:

        Msg 443, Level 16, State 1, Procedure RandFn, Line 7
        Invalid use of a side-effecting operator 'rand' within a function.

    Here is the simplified T-SQL code of the function I am using:

        CREATE FUNCTION RandFn()
        RETURNS INT
        AS
        BEGIN
            DECLARE @rndValue INT
            SET @rndValue = RAND()
            RETURN @rndValue
        END
        GO

    I must use a UDF, so is there any workaround to use the RAND function in a UDF?" Here is a workaround for using RAND() in a UDF. The scope of this blog post is not to discuss the advantages or disadvantages of random functions, but just to show how RAND() can be used in a UDF. RAND() is not allowed directly in a UDF, so we have to find an alternate way to use it. This can be achieved by creating a VIEW which uses the RAND() function, and then using that VIEW in the UDF. Here are the step-by-step instructions. Create a VIEW using the RAND function:

        CREATE VIEW rndView
        AS
        SELECT RAND() rndResult
        GO

    Create a UDF using the same VIEW:

        CREATE FUNCTION RandFn()
        RETURNS DECIMAL(18,18)
        AS
        BEGIN
            DECLARE @rndValue DECIMAL(18,18)
            SELECT @rndValue = rndResult
            FROM rndView
            RETURN @rndValue
        END
        GO

    Now execute the UDF and it will work fine, returning a random result:

        SELECT dbo.RandFn()
        GO

    In the T-SQL world, I have noticed that there is more than one solution to every problem. Is there a better solution to this question? Please post it as a comment and I will include it with due credit. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: technology
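
    Another workaround sometimes used (an assumption on my part, not from the original post) is to evaluate RAND() at the call site and pass the result into the UDF as a parameter, since the restriction only applies to calling RAND() inside the function body:

        -- Hypothetical sketch: the caller supplies the random value
        CREATE FUNCTION dbo.RandFnParam (@rnd FLOAT)
        RETURNS INT
        AS
        BEGIN
            -- Map the [0, 1) input to an integer in 0..99
            RETURN CAST(@rnd * 100 AS INT)
        END
        GO

        SELECT dbo.RandFnParam(RAND())  -- RAND() is evaluated outside the UDF
        GO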

    Read the article

  • Unix: Sync directory with FTP or SFTP directory

    - by Svish
    I have a website on my local computer running Mac OS X. I am wondering if there is any built-in command I can run in the Terminal that will upload that website to my web server through either FTP or, if possible, SFTP. Installing new commands through MacPorts is also a possibility. A big bonus would be if it only uploaded the files that need to be updated and not everything else. It would also be nice if I could tell it, once in a while, to delete the files on the server that no longer exist locally. Any good tips?
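
    A minimal sketch, assuming the server accepts SSH logins (rsync ships with Mac OS X; the host and paths here are hypothetical). -a preserves permissions and timestamps, -z compresses, and only changed files are transferred:

        rsync -avz ~/Sites/mysite/ user@example.com:/var/www/mysite/
        # Occasionally, also remove server files that no longer exist locally
        rsync -avz --delete ~/Sites/mysite/ user@example.com:/var/www/mysite/

    For plain FTP (no SSH), lftp from MacPorts offers a comparable "mirror -R" mode.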

    Read the article

  • Radeon 9200 Driver?

    - by usuhh86
    I've been using Linux for a while, but haven't really done much more than install programs. I have Ubuntu 12.04 with an ATI Radeon 9200 graphics card, which Ubuntu didn't recognize during the install. All I want is a driver; preferably the proprietary driver, but I'll settle for an open-source one if I need to. I went to the AMD support website, and whenever I click on the download link for "ATI Proprietary Linux x86 Display Driver 8.28.8" I get redirected to the main AMD website. I tried this in Firefox, Chrome, and even booted up my netbook and tried it in IE9. Does anybody have a download link? And if not, a link to an open-source alternative? Any help will be greatly appreciated; I'm pretty much still a newbie at this.

        OpenGL vendor string: Tungsten Graphics, Inc.
        OpenGL renderer string: Mesa DRI R200 (RV280 5961) x86/MMX/SSE2 TCL DRI2
        OpenGL version string: 1.3 Mesa 8.0.2
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: no
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: no
        Unity 3D supported: no

    Read the article

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the payment gateway's (PG) website). Now I'm trying to implement "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from HostGator (HG). Options: I take SSL for www.my_domain.com. According to HG, I need to change the IP of the main site, as the current IP is not really dedicated (it is shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours of downtime (which is OK). I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I have a few doubts in this case: a) Would I need to re-register with the existing PGs (PayPal, Google Checkout, Authorize.Net) if I switch to a subdomain? Re-registering is not an option for me. b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved." Please advise.

    Read the article

  • Basic IIS7 permissions question

    - by Tom Gullen
    We have a website with a file: www.example.com/apis/httpapi.asp. This file is used by the site internally to make requests joining two systems on the website together (one is Classic ASP, the other ASP.NET). However, we do not want the public to be able to access the file. In IIS 7.5, is there a setting I can use to make this file internal only? I've tried rewriting the URL for it, but the rewrite is also applied internally, so the scripts stop working when they fetch the rewritten URL. Thanks for any help!
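
    One possible approach, sketched under two assumptions: the server-to-server requests originate from the machine itself, and the IIS "IP and Domain Restrictions" role feature is installed. A web.config in the /apis folder can then admit only the server's own addresses, so internal calls keep working while public requests get a 403:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <location path="httpapi.asp">
            <system.webServer>
              <security>
                <!-- Deny everyone except the server itself -->
                <ipSecurity allowUnlisted="false">
                  <add ipAddress="127.0.0.1" allowed="true" />
                  <!-- If internal calls arrive via the server's public IP
                       rather than loopback, list that address here too -->
                </ipSecurity>
              </security>
            </system.webServer>
          </location>
        </configuration>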

    Read the article

  • Showing content from pages at different URL's (masking), possibly with .htaccess

    - by zigojacko
    If I have URLs like:

        domain.com/category/widgets/filter/blue
        domain.com/category/widgets/filter/red

    and it is pretty difficult to restructure them to something like:

        domain.com/category/blue-widgets
        domain.com/category/red-widgets

    is there any way at all that I can use URL rewrites, or anything else with .htaccess or on the server, to display the URL as domain.com/category/blue-widgets on the domain.com/category/widgets/filter/blue page? I've looked into masking URLs but got nowhere, and this has been bugging me for almost 6 months now. Is there any way to achieve what I want to do? FYI: this is a Magento website, and I want to implement the above for potentially hundreds of URLs. Edit: To respond to @kkugelmann's answer: I couldn't get your proposed RewriteRule to make any difference in the .htaccess file, so I started testing a few things in an .htaccess tester. The proposed RewriteRule didn't work in the tester, but another variant did; however, adding any of these RewriteRules to the website's .htaccess file did not rewrite the URL at all. Edit 2: If I add [R=301,L] to the end of the rewrite rule, it does rewrite, but of course it also 301-redirects, which is unwanted behaviour. Edit 3: I found another question with the same issue, and an accepted answer that solved it, which involved using mod_proxy and the [P] flag on the rule (if I try this, the page 404s).
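
    For reference, a minimal sketch of the internal-rewrite idea (assuming mod_rewrite; the pattern mirrors the hypothetical URLs above). Without an R flag the browser keeps the pretty URL while Apache serves the filter URL on the same server, so no proxying is needed; [P] only becomes necessary across hosts, and it requires mod_proxy and mod_proxy_http to be enabled:

        RewriteEngine On
        # Serve /category/widgets/filter/blue when /category/blue-widgets is requested
        RewriteRule ^category/([a-z]+)-widgets$ /category/widgets/filter/$1 [L]

    Note that Magento's stock .htaccess routes most requests to index.php, so a custom rule like this generally has to come before Magento's own rules to take effect.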

    Read the article

  • Why was my site rejected for Google Adsense?

    - by hyuun jjang
    I have a 3-year-old blog with around 16 articles/tutorials about programming problems and solutions. It has been getting a lot of views lately, so I decided to apply for a Google AdSense account. When I first applied via Blogger, Google replied with the following statement:

        Page Type: In order to participate in Google AdSense, publishers' websites and application information must satisfy the following guidelines:
        - Your website must be your own top-level domain (www.example.com and not www.example.com/mysite).
        - You must provide accurate personal information with your application that matches the information on your domain registration.
        - Your website must contain substantial, original content...

    So, as I understood it, I decided to buy a domain and point my Blogger blog to that new naked domain. Here is the newly bought domain where all the content of my old blog now resides: http://icodeya.com/ I reapplied, hoping that this time I would make the cut. But then I got this reply:

        Further detail: Unable to review your site: While reviewing http://www.icodeya.com/, we found that your site was down or unavailable. We suggest you check whether there was a typo in the URL submitted. When your site is operational, you can resubmit your application with the correct site by following the directions below.

    I'm a bit disappointed. Maybe I did something wrong with the DNS configuration, but you can clearly see that my site is fully functional. I heard that Google sends robots to crawl the site, etc. It's just sad, because I invested in a domain name and now I can't even find a way to earn from it. Any tips?

    Read the article

  • Why is Ubuntu offline (except torrents) while Windows is online?

    - by Fahim al Islam
    I am using a static wired connection. Everything was perfect, but suddenly, starting a few hours ago, I can't access any website. Dropbox and Ubuntu One also can't connect. Ping requests are unsuccessful too, but I can download through torrents. I am not trying torrent downloads and browsing at the same time, so I don't think this is an issue of torrents using all the bandwidth. One important point is that this connection works perfectly in Windows on this same PC (my PC is dual-boot). I have tried what izx suggested (using "sudo sh -c 'echo nameserver 8.8.8.8 >> /etc/resolv.conf'"), but I'm facing the same problem again. Now I can't even ping 8.8.8.8 or google.com, though I can ping 74.125.228.2 (which is a Google IP address). I can't understand what's happening or why. I'm new to this website and many of its rules are unknown to me, so please don't be bothered by my mistakes. Looking forward to help from anyone. Thanks to all.
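
    A few quick checks that separate routing problems from DNS problems (a sketch; the IPs are the ones mentioned above):

        ping -c 3 74.125.228.2        # works here, so raw IP routing is fine
        ping -c 3 8.8.8.8             # fails here, so some addresses are filtered
        nslookup google.com 8.8.8.8   # test name resolution against Google DNS
        cat /etc/resolv.conf          # confirm which nameservers are actually set

    If names fail while raw IPs work, the problem is DNS; if some raw IPs also fail while torrents still flow, a gateway or firewall filtering rule is the more likely culprit.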

    Read the article

  • Legal issues regarding embedding a toolbar into a browser [closed]

    - by OmarOthman
    We are in the process of developing software that provides a service to internet users, and we would like to ask about the legal liabilities of some issues. Of course, everything is to be done with the consent of the user of our software, but our concern is about third-party tools and services that may be invoked/used by our product. In particular, these are the concerns: (1) Embedding a toolbar in an existing browser. This screenshot is an example, where the words in the highlighted toolbar are passed to www.google.com for searching, and the contents of the window are the results of the search. I want to know whether any consent should be obtained before such a toolbar can be embedded in a web browser, whether there are any legal requirements from the web browser, and whether different web browsers have different requirements (at least for Internet Explorer, Firefox, Chrome, Opera and Safari). (2) Invoking a free website from that toolbar (like Google's search page). The screenshot above demonstrates such an existing toolbar. (3) Full ownership of and unrestricted access to the data entered into this toolbar. In the screenshot above, I want to take the words (translation English to Spanish) and own them, i.e. store them in my database and do some processing on them. (4) The ability to track the pages visited by the user starting from that free website. In the screenshot above, you can notice that the user opted only for the third result, whose URL is translate.google.com. I want to have access to this and all URLs clicked from this page for some processing as well. This is a commercial application, so I need a very concrete, precise and reference-supported answer.

    Read the article

  • Personalized Pricing

    - by David Dorf
    In past postings I've spent a fair amount of time talking about targeted promotions. Using a complete view of the customer that includes purchase history, location history, and psychographics gleaned from social media, we can select the offer with the greatest chance of redemption. This is done to influence shopping behavior, which might mean introducing the consumer to a new product line, increasing their basket size, increasing the frequency of purchases, etc. Safeway seems to be taking a slightly different approach with their personalized pricing. In addition to offering electronic coupons and club card offers, they are also providing a personalized price for certain items based on purchase history. So when Sally wants to shop at Safeway, she first checks the "Just for U" website for three types of deals. She starts by selecting manufacturer coupons to load into her loyalty card, then she checks the Club Card for offers like "buy one, get one free." The third step is the interesting one: Safeway will set a particular lower price for Sally, good for 90 days, on items she buys often. Clearly this isn't enforcing a new behavior but rather instilling loyalty. I would love to know exactly how they determine the personalized price. Of course, bargain hunters can still stack the three offers so they can, for example, get their $4.99 oatmeal for $0.72. I like this particular question and answer from the website's FAQ: "My offers are not that great. Can I tell you what offers I need?" "That's a good idea. That functionality is not currently available, but we appreciate your input and are constantly improving our just for U program. Stay tuned for exciting enhancements!" I suppose if Safeway is tracking all the purchases, they can easily determine whether the customer is profitable. As long as the customer stays profitable, why not let them determine a few offers themselves? Food for thought.

    Read the article

  • Basic Google Analytics Click Tracking and/or Overview

    - by Alan Storm
    This is a really basic Google Analytics question; apologies in advance if it's not appropriate here, but I've had a lot of luck on Stack Overflow and this seems like the best Stack Exchange site for a question like this. I'm trying to understand how Google Analytics goals work, or whether they're even the right feature for my situation. Most of the documentation I find online refers to the old version of the UI, not the new one. I have a website, let's call it blog.example.com. This website drives traffic to an ecommerce store, let's call that store.example2.com. I want reports on which links from blog.example.com are being clicked through to store.example2.com. How do you do this in Google Analytics? Are goals the right area to be looking at? Do I set up the goals on store.example2.com or blog.example.com? Or both? Is there any canonical user guide (free or paid) that covers how this works? I'm a competent programmer, but it's been years since I dealt with conversion tracking on any serious level, and we've progressed well beyond my frozen-caveman pixel-tracking knowledge. Thanks in advance.
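
    One way to get the click data without goals, sketched against the classic ga.js API that matched this era of Analytics (the category and action names are hypothetical): fire an event on each outbound link on blog.example.com, then read the Events reports in that property.

        <a href="http://store.example2.com/product/42"
           onclick="_gaq.push(['_trackEvent', 'Outbound', 'Store Click', this.href]);">
          See it in the store
        </a>

    Measuring the full journey into store.example2.com instead would require cross-domain tracking (_setDomainName and _link in classic GA), which is where goals configured on the store's profile come in.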

    Read the article

  • Why does Google return a soft 404 when I redirect to the signup page?

    - by Hettomei
    For the past month I've seen an increasing number of "soft 404" errors reported by Google Webmaster Tools, although the pages work well for users. I made some fixes but can't figure out how to solve it. Configuration (maybe useless): the website is built with Rails 3.1, and authentication is handled by the Devise gem. Problem: on the page http://en.bemyboat.com/yacht-charter/9965-sailboat-beneteau-oceanis-43, when you click on "Ask a Boat request" (a simple GET form pointing to http://en.bemyboat.com/boat_requests/new/9965), you are redirected with HTTP status 302 to sign in, and then sent back to the new page after successfully signing in. Google tells me that the link on "Ask a Boat request" returns a soft 404. I can't make this form use POST (which would solve the problem) because we need to automatically redirect the user to the right page after sign-in (the Devise gem memorizes the GET link). To simplify, the question is: how do I protect a private page behind authentication, reached with a simple GET, without being penalized by Google with a soft 404? Thank you. PS: this website's English still needs a lot of work; please bear with it.

    Read the article

  • SharePoint Designer prompts for credentials when edited from IE8

    - by Rob Nicholson
    Our intranet is hosted using the free SharePoint services on Windows 2003. Consider the following page: http://vserver003/help/technology/multimedia/multimedia.htm. On selecting "Edit with Microsoft Office SharePoint Designer" from IE8, SPD launches, opens the website and then the selected page; all is well. To make moving the intranet easier, we've set up a DNS entry called intranet.company.local so you can also access the intranet this way: http://intranet.company.local/help/technology/multimedia/multimedia.htm. However, when you edit this page, SharePoint Designer prompts you for credentials, i.e. domain\username and password. If you enter the details, it opens fine; if you don't, the page still opens but not the website. Any ideas how to get around this prompt? I haven't a clue where to start looking. Thanks, Rob. PS: The same prompt occurs if you use the physical IP address.

    Read the article

  • How can I receive more traffic? My VPS fails!!!

    - by qtrix
    I have a website, a photo gallery with about 400 photos, built on Gallery 3 with MySQL and hosted on a VPS from myhosting.com (CPU 1792 MHz, 2048 MB RAM). Everything seems to be OK, but there is one big problem: once traffic reaches about 20 people online, the website starts loading really, really slowly; pages can take 30-60 seconds to load. What should I do? Buy more RAM/CPU on the same VPS? Move to a dedicated server? Or does myhosting.com just suck? What do you recommend?
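
    Before paying for hardware it is worth reproducing the load and watching where the box chokes; a minimal sketch with tools commonly available on a VPS (the URL is hypothetical):

        # Simulate 20 concurrent visitors, 200 requests total (ApacheBench)
        ab -n 200 -c 20 http://example-gallery.com/
        # Meanwhile, watch whether CPU, RAM, or the MySQL process is the bottleneck
        top

    If MySQL dominates, adding a PHP opcode cache or page caching usually helps far more than extra RAM.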

    Read the article

  • Windows XP Home Edition SP3 can't recognise PCMCIA SD card

    - by Pozo
    System specifications: Laptop: Dell Inspiron 6000. OS: Windows XP Home Edition SP3. SD adapter: Hagiwara Smart Media Adapter. I inserted the card into the slot; Windows XP recognises the device, lists the PCMCIA controller in Device Manager, and an entry appears under the IDE ATA/ATAPI category as well. However, the device does not show up under My Computer and the drive does not get assigned a letter. I checked the system logs from Device Manager and there were no logged errors. Checking the Hagiwara support website, the manufacturer indicates that the adapter driver is the same as the Windows XP PCMCIA controller's. Checking Dell's website, no specific drivers were listed either. A general search on the web indicates that many people face similar problems with their SD cards, yet none actually spell out the root issue that causes this. Please let me know if you have any suggestions for further debugging. Thanks in advance.

    Read the article

  • How can I pass referrer header from my https domain to http domains?

    - by nutcracker
    My website is 100% HTTPS, and I have links to other HTTP domains. The referrer header is not set when linking from an HTTPS page to an HTTP page. From http://en.wikipedia.org/wiki/HTTP_referrer: "If a website is accessed from a HTTP Secure (HTTPS) connection and a link points to anywhere except another secure location, then the referer field is not sent." I would prefer that other domains can see the referrer so they know the traffic comes from my domain. Is there a way to force this header, or is there another solution? Update: I've done some basic testing using a redirect:

        http page  -- link to http  --> 301 redirect --> http page = referrer intact
        https page -- link to https --> 301 redirect --> http page = referrer blank
        https page -- link to http  --> 301 redirect --> http page = referrer blank
        https page -- link to http  --> 302 redirect --> http page = referrer blank

    The referrer is lost when linking from an HTTPS page to an HTTP redirect page on my own domain, so there is no referrer after the redirect.
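
    A later platform feature addresses exactly this case (a sketch; browser support for the referrer policy postdates this question, so older browsers will still send nothing): declare a policy on the HTTPS pages that keeps at least the origin on a downgrade.

        <!-- Send scheme+host (not the full URL) even on HTTPS -> HTTP links -->
        <meta name="referrer" content="origin">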

    Read the article

  • Controlling access to site folders if you cannot use Roles

    - by DavidMadden
    I found myself on an assignment where I could not use System.Web.Security.Roles. That meant I could not use Visual Studio's Website | ASP.NET Configuration, and had to go about things another way. The clues were in these two websites:

        http://www.csharpaspnetarticles.com/2009/02/formsauthentication-ticket-roles-aspnet.html
        http://msdn.microsoft.com/en-us/library/b6x6shw7(v=VS.71).aspx

    You can set the restrictions on folders in your web.config without having to set them in multiple folders through their own web.config files. In the main default.aspx file in the protected subfolder off my main site, I did the following, with MultiFormAuthentication (MFA) providing the security to this point:

        string role = string.Empty;
        if (((Login)Session["Login"]).UserLevelID > 3)
        {
            role = "PowerUser";
        }
        else
        {
            role = "Newbie";
        }

        FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(
            1,
            ((Login)Session["Login"]).UserID,
            DateTime.Now,
            DateTime.Now.AddMinutes(20),
            false,
            role,
            FormsAuthentication.FormsCookiePath);

        string hashCookies = FormsAuthentication.Encrypt(ticket);
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, hashCookies);
        Response.Cookies.Add(cookie);

    This gave me the ability to change restrictions on folders without having to restart the website or do any hard coding.
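
    For web.config <authorization> rules to see the role stored in the ticket, something still has to surface it on each request; a minimal sketch of the usual companion piece, assuming a Global.asax (this part is not from the original post):

        // Global.asax: rebuild the principal from the ticket's UserData
        protected void Application_AuthenticateRequest(object sender, EventArgs e)
        {
            HttpCookie authCookie = Request.Cookies[FormsAuthentication.FormsCookieName];
            if (authCookie == null) return;

            FormsAuthenticationTicket ticket = FormsAuthentication.Decrypt(authCookie.Value);
            // UserData holds the role written when the ticket was created
            string[] roles = ticket.UserData.Split(',');
            HttpContext.Current.User = new System.Security.Principal.GenericPrincipal(
                new System.Web.Security.FormsIdentity(ticket), roles);
        }

    With that in place, a <location> block in the root web.config (e.g. <allow roles="PowerUser" /> followed by <deny users="*" />) restricts each folder centrally.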

    Read the article

  • Title of the page in search results and title of google's cached version are different. Why?

    - by Azmorf
    Check this: http://www.google.com/search?q=site:gunlawsbystate.com+kansas+gun+laws The title of the first result is "Kansas Gun Laws - Gun Laws By State", although on the page Google has cached, the title is different: <title>Kansas Gun Laws - Kansas Gun Law - Reciprocity Guide</title>. Google is showing the title that was on the site 2-3 months ago. Googlebot has visited the website many times since then, and as you can see it has even cached it (the latest version is from 15th Sept); however, for some reason it doesn't change the title in the search results to the new one. We use a hash-bang URL structure on this website, and it fully meets Google's requirements for AJAX websites (the _escaped_fragment_ mechanism). The issue I explained is happening with almost all hash-bang pages that got indexed. Questions: Why does Google keep the old page title in the search results? Can it be connected to the fact that I'm using hash-bang URLs? There are lots of pages on the site with the same issue, all of them with hash-bang URLs. Another thing I noticed is that Google's "Preview" feature doesn't work for any hash-bang URLs on the site. Did I do something wrong? It has cached versions of the pages, so why wouldn't it generate a preview? Thanks (and sorry for my English). PS: Here's another weird thing I noticed: this search query https://www.google.com/search?q=Kansas+Gun+Laws+-+Reciprocity+Guide shows the correct title for the same page as in the example above. Why does Google show different titles for the same page for different queries?

    Read the article
