Search Results

Search found 18800 results on 752 pages for 'sqlauthority website revi'.

Page 130 of 752

  • Please give guidance on making a social networking website

    - by user287745
    What is an API? If I make a database and a social site in ASP.NET and C# with SQL and host it, how can I ask others to make applications for it? I mean, how is that possible? I have the code, so is the API a piece of software? I want to make a social networking site like Facebook, but it's proving to be very, very difficult, with AJAX, JavaScript and all that. Please help by providing examples, readable code and guidance. Thank you. I had asked a similar question but deleted it because it was not clear.

    Read the article

  • Multi-lingual website and webby

    - by ximus
    Hi, does anyone know how best to implement a multilingual static site using Webby? I would put content for the multiple languages in content/{lang}/{page}.txt for starters; any ideas on the rest? I've never used Webby. Thanks, Max.

    Read the article

  • Multilanguage website sitemap

    - by Alex
    My site is internationalised (i18n) with the following structure:

        mydomain.com/en
        mydomain.com/en/product/blue-widgets
        mydomain.com/fr
        mydomain.com/fr/product/blue-widgets

    The site is internationalised, not localised, so I don't want to geo-target specific locales; I just want to target French- or English-speaking users. When submitting a sitemap to the search engines, should I send one sitemap with links to all the different language versions, or have one separate sitemap for each language? Is that even possible?
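
    One sitemap that lists every URL and cross-references its language alternates is usually enough. A minimal sketch of what that could look like, assuming the engines you submit to accept hreflang annotations in sitemaps (the URLs are the example ones from above):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:xhtml="http://www.w3.org/1999/xhtml">
          <url>
            <loc>http://mydomain.com/en/product/blue-widgets</loc>
            <!-- each entry lists all of its language alternates, itself included -->
            <xhtml:link rel="alternate" hreflang="en" href="http://mydomain.com/en/product/blue-widgets"/>
            <xhtml:link rel="alternate" hreflang="fr" href="http://mydomain.com/fr/product/blue-widgets"/>
          </url>
          <url>
            <loc>http://mydomain.com/fr/product/blue-widgets</loc>
            <xhtml:link rel="alternate" hreflang="en" href="http://mydomain.com/en/product/blue-widgets"/>
            <xhtml:link rel="alternate" hreflang="fr" href="http://mydomain.com/fr/product/blue-widgets"/>
          </url>
        </urlset>

    Separate per-language sitemaps are also fine (submitted individually or via a sitemap index); either arrangement works as long as every URL appears somewhere.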

    Read the article

  • IIS website is sending multiple content-type headers for zip files

    - by frankadelic
    We have a problem with an IIS 5 server. When certain users/browsers click to download .zip files, binary gibberish sometimes renders in the browser window. The desired behavior is for the file either to download or to open with the associated zip application. Initially, we suspected that the wrong Content-Type header was set on the file. The IIS tech confirmed that .zip files were being served by IIS with the MIME type "application/x-zip-compressed". However, an inspection of the HTTP packets using Wireshark reveals that requests for zip files return two Content-Type headers:

        Content-Type: text/html; charset=UTF-8
        Content-Type: application/x-zip-compressed

    Any idea why IIS is sending two Content-Type headers? This doesn't happen for regular HTML or image files. It does happen with ZIP and PDF. Is there a particular place we can ask the IIS tech to look? Or is there a configuration file we can examine?
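
    For what it's worth, a quick way to confirm the duplicated header from any machine without firing up Wireshark is to dump the raw response headers with a few lines of PHP (the URL below is a placeholder for a real .zip on the affected server):

        <?php
        // Placeholder URL; substitute a zip file that misbehaves.
        $url = 'http://example.com/downloads/archive.zip';

        // Ask for HEAD so the zip body isn't downloaded just to read the headers.
        stream_context_set_default(array('http' => array('method' => 'HEAD')));

        // get_headers() returns every raw header line, so a duplicated
        // Content-Type will simply appear twice in the output.
        foreach (get_headers($url) as $line) {
            echo $line, "\n";
        }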

    Read the article

  • Using Active Directory to authenticate users in a WWW facing website

    - by Basiclife
    Hi, I'm looking at starting a new web app which needs to be secure (if for no other reason than that we'll need PCI accreditation at some point). From previous experience working with PCI (on a domain), the preferred method is to use integrated Windows authentication, which is then passed all the way through the app to the database. This allows for better auditing as well as object-level permissions (i.e. an end user can't read the credit card table). The advantage is that even if someone compromises the web server, they won't be able to glean any additional information from the database. Also, the web server isn't storing any database credentials (beyond perhaps a simple anonymous user with very few permissions).

    So, now I'm looking at the new web app, which will be on the public internet. One suggestion is to have an Active Directory server and create Windows accounts on the AD for each user of the site. These users will then be placed into the appropriate NT groups to decide which DB permissions they should have (and which pages they can access). ASP.NET already provides the AD membership provider and role provider, so this should be fairly simple to implement.

    There are a number of questions around this (scalability, reliability, etc.) and I was wondering if there is anyone out there with experience of this approach or, even better, some good reasons why to do it / not to do it. Any input appreciated. Regards, Basiclife

    Read the article

  • Markup filter wanted for a public website

    - by sibidiba
    I'm developing a community site where everyone can post text, and I'm looking for a markup filter:

    - Anything that is not part of the markup must be escaped (htmlspecialchars()) as-is.
    - It should turn URLs into links automatically.
    - It should support some form of basic markup (bold, image, url, pre, list).
    - It should have a simple parser that turns user input into HTML.

    Content on the site is public to everyone, so XSS must not be allowed to happen. What do you suggest? Which markup language in the first place: BBCode? Wiki? Markdown? Are there any complete APIs with good examples? PHP is available on the server side. If there is a WYSIWYG-like textarea in addition (like here on SO), that would be a fantastic bonus!
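
    Whatever markup you settle on, the safe order of operations is to escape everything first and then re-introduce a small whitelist of constructs. A minimal sketch of that idea in PHP (the function name and the tiny two-tag whitelist are illustrative, not a complete filter):

        <?php
        // Escape-first filter: all input is HTML-escaped, then a small
        // whitelist of markup is converted back. Illustrative sketch only.
        function render_user_text($text)
        {
            // 1. Escape everything so raw HTML can never reach the page (anti-XSS).
            $html = htmlspecialchars($text, ENT_QUOTES, 'UTF-8');

            // 2. Turn plain URLs into links automatically.
            $html = preg_replace(
                '#\bhttps?://[^\s<]+#i',
                '<a href="$0" rel="nofollow">$0</a>',
                $html
            );

            // 3. Convert a couple of BBCode-style tags back into HTML.
            $html = preg_replace('#\[b\](.*?)\[/b\]#is', '<strong>$1</strong>', $html);
            $html = preg_replace('#\[pre\](.*?)\[/pre\]#is', '<pre>$1</pre>', $html);

            // 4. Preserve line breaks.
            return nl2br($html);
        }

        echo render_user_text("Visit http://example.com and [b]enjoy[/b]!");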

    Read the article

  • .NET website: access denied when creating a directory on a remote server

    - by tmfkmoney
    I have a web application that creates directories. The application works fine when creating a directory on the web server; however, it does not work when it tries to create a directory on our remote file server. The file server and the web server are in the same domain. I have created a user in our domain, "DOMAIN\aspnet", which exists on both servers. I am running my .NET app pool under the domain user. I have also tried using Windows impersonation in the web.config to run under the domain user. I have verified that the domain user has full control of the remote directory. In an effort to debug this I have also given "Everyone" full control of the remote directory and added the domain user to the administrators group. I have a simple .NET test page on the web server to test this. Through the test page I am able to read the directory on the file server and get a list of everything in it. I am not able to upload files or create directories on the file server.

    Here's code that works:

        var path = @"\\fileserver\images\";
        var di = new DirectoryInfo(path);
        foreach (var d in di.GetDirectories())
        {
            Response.Write(d.Name);
        }

    Here's code that doesn't work:

        path = Path.Combine(path, "NewDirectory");
        Directory.CreateDirectory(path);

    Here's the error I'm getting:

        Access to the path '\\fileserver\images\NewDirectory' is denied.

    I'm pretty stuck on this. Any ideas?

    Read the article

  • Cannot see my WordPress website in Google search

    - by ion
    Hi guys, I recently uploaded a site made with WordPress. The site URL is oakabeachvolley.gr. I have set the privacy settings of WordPress for the site to be visible to search engines. However, after almost 45 days the site is invisible on Google, even when I'm searching using the URL name and very specific keywords. I have made quite a few sites with WordPress and I have never seen this behavior before: sites eventually become visible to the Google engine, sometimes even on the first day. In this case, however, the site does not show up anywhere in the first 20 pages. Any help would be greatly appreciated.

    Read the article

  • Share the same cookie between two websites using the PHP cURL extension

    - by powerboy
    I want to get the contents of some emails in my Gmail account, and I would like to use the PHP cURL extension to do this. I followed these steps in my first try:

    1. In the PHP code, output the contents of https://www.google.com/accounts/ServiceLoginAuth.
    2. In the browser, the user inputs a username and password to log in.
    3. In the PHP code, save the cookies in a file named cookie.txt.
    4. In the PHP code, send a request to https://mail.google.com/ along with the cookies retrieved from cookie.txt and output the contents.

    The following code does not work:

        $login_url   = 'https://www.google.com/accounts/ServiceLoginAuth';
        $gmail_url   = 'https://mail.google.com/';
        $cookie_file = dirname(__FILE__) . '/cookie.txt';

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($ch, CURLOPT_HEADER, false);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file);

        curl_setopt($ch, CURLOPT_URL, $login_url);
        $output = curl_exec($ch);
        echo $output;

        curl_setopt($ch, CURLOPT_URL, $gmail_url);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file);
        $output = curl_exec($ch);
        echo $output;

        curl_close($ch);
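
    One thing the flow above glosses over is that the browser, not cURL, performs the login, so the session cookies never reach cookie.txt. A minimal sketch of the cookie-jar round trip when cURL itself submits the credentials (the endpoint and field names are hypothetical placeholders; Google's real login form also requires hidden tokens):

        <?php
        $cookie_file = dirname(__FILE__) . '/cookie.txt';

        // 1. Let cURL submit the login form so the session cookie lands in the jar.
        //    Hypothetical endpoint and field names.
        $ch = curl_init('https://example.com/login');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, array('user' => 'me', 'pass' => 'secret'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file); // written when the handle closes
        curl_exec($ch);
        curl_close($ch);                                   // flushes the jar to disk

        // 2. Re-use the saved cookies on a second request to a protected page.
        $ch = curl_init('https://example.com/inbox');
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file); // send cookies from the jar
        echo curl_exec($ch);
        curl_close($ch);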

    Read the article

  • Simple blog engine for a website

    - by Alexander
    I know that there are a lot of free open-source blog engines out there, such as BlogEngine.NET. However, that's overkill for my purpose. So far I have created my own simple one by storing posts in .xml files, so every time the main page loads it reads all those XML files and displays them as posts. Now my problem is that when a user clicks on a post title I want it to show on a new page (.aspx), so if the title is X then I want a new page called X.aspx when the user clicks the title on the homepage. I hope this makes sense. My question is: how do I create such a thing?

    Read the article

  • Website URL whitelists

    - by buggedcom
    I'm building a user-content parser and am adding an automatic link parser. I'm adding a dialog that confirms that the user wants to go to the particular site being linked to, for two reasons: anti-phishing and spam combating. However, I want to be able to disable both the dialog and the nofollow additions for commonly used websites, so I'm building a whitelist. Are there any common whitelists around, or should I start building one from scratch?
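
    Whichever list you end up with, the check itself is usually just a host comparison against the whitelist. A minimal sketch in PHP (the whitelist entries are placeholders):

        <?php
        // Hosts that skip the confirmation dialog and the rel="nofollow" addition.
        // Placeholder entries; populate from your own list or a published one.
        $whitelist = array('example.com', 'wikipedia.org', 'github.com');

        function is_whitelisted($url, array $whitelist)
        {
            $host = parse_url($url, PHP_URL_HOST);
            if ($host === false || $host === null) {
                return false; // unparsable URL: treat as untrusted
            }
            $host = strtolower($host);
            foreach ($whitelist as $trusted) {
                // Match the bare domain and any subdomain of it.
                $suffix = '.' . $trusted;
                if ($host === $trusted || substr($host, -strlen($suffix)) === $suffix) {
                    return true;
                }
            }
            return false;
        }

        var_dump(is_whitelisted('http://en.wikipedia.org/wiki/PHP', $whitelist)); // bool(true)
        var_dump(is_whitelisted('http://evil.example.net/phish', $whitelist));    // bool(false)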

    Read the article

  • Problem integrating a WordPress blog into a CakePHP website

    - by Nishant
    Hello everyone. I was working on a CakePHP site which was successfully delivered, but recently the client came back and asked me to add a WordPress blog to it, to cover the blogging side of his site. He wants to share authentication between CakePHP and WordPress: whoever registers on his site and logs in, then clicks on the Blog tab, must be redirected to the WordPress blog with the session still there. After some googling I have installed it in the /app/webroot/blog folder, but I am not able to edit the .htaccess file. Please point me in the right direction: first, how to share authentication between CakePHP and WordPress, and second, how to customize the .htaccess file so that the URLs look good. Thanks in advance!

    Read the article

  • FormsAuthentication redirecting to login page when visiting root of website

    - by Ryan Lattimer
    I wanted to use FormsAuthentication to secure my static files as well on my site, so I followed the instructions located at http://learn.iis.net/page.aspx/244/how-to-take-advantage-of-the-iis7-integrated-pipeline/ under the title "Enabling Forms Authentication for the Entire Application". Now, though, when I try to visit the site by going directly to http://www.mysite.com I get redirected to http://www.mysite.com/Login.aspx?ReturnUrl=%2f instead of it using the default document I have set. I can go to my default document by just visiting http://www.mysite.com/Home.aspx without any issues, because it is set to allow anonymous access. Is there something I need to add to my web.config file to make IIS7 allow anonymous access to the root? I tried adding it with anonymous access but no such luck. Any help would be much appreciated.

    Both Home and the login form allow anonymous access:

        <location path="Home.aspx">
          <system.web>
            <authorization>
              <allow users="*" />
            </authorization>
          </system.web>
        </location>
        <location path="Login.aspx">
          <system.web>
            <authorization>
              <allow users="*" />
            </authorization>
          </system.web>
        </location>

    The login form is set as the loginUrl:

        <authentication mode="Forms">
          <forms protection="All" loginUrl="Login.aspx">
          </forms>
        </authentication>

    The default document is set to Home.aspx:

        <defaultDocument>
          <files>
            <add value="Home.aspx" />
          </files>
        </defaultDocument>

    I have not removed any of the IIS7 default documents; however, Home.aspx is first in the priority list.

    Read the article

  • How to parse a custom XML-style error code response from a website

    - by user1870127
    I'm developing a program that queries and prints out open data from the local transit authority, which is returned in the form of an XML response. Normally, when there are buses scheduled to run in the next few hours (and in other typical situations), the XML response generated by the page is handled correctly by the java.net.URLConnection.getInputStream() function, and I am able to print the individual results afterwards. The problem is when the buses are NOT running, or when some other problem with my queries develops after it is sent to the transit authority's web server. When the authority developed their service, they came up with their own unique error response codes, which are also sent as XML. For example, one of these error messages might look like this:

        <Error xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
          <Code>3005</Code>
          <Message>Sorry, no stop estimates found for given values.</Message>
        </Error>

    (This code and similar is all that I receive from the transit authority in such situations.) However, it appears that URLConnection.getInputStream() and some of its siblings are unable to interpret this custom code as a "valid" response that I can handle and print out as an error message. Instead, they give me a more generic HTTP/1.1 404 Not Found error. This problem cascades into my program, which then prints out a java.io.FileNotFoundException error pointing to the offending input stream. My question is therefore two-fold:

    1. Is there a way to retrieve, parse, and print a custom XML-formatted error code sent by a web service using the plugins that are available in Java?
    2. If the above is not possible, what other tools should I use or develop to handle such custom codes as described?

    Read the article

  • Website revision control system

    - by kylex
    I'm looking for the ability to use a revision control system for websites, but ALSO have the revisions go live immediately. Example: a developer commits to the repository, and those changes are immediately pulled from the repository and go live. Any suggestions on available software?
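
    Most version control systems can do this with a server-side hook: the repository's post-commit hook notifies the web server, which updates its working copy. A minimal sketch of the receiving end in PHP, assuming Subversion and a working copy checked out under the web root (the path and shared secret are placeholders):

        <?php
        // deploy.php -- called by the repository's post-commit hook, e.g.
        //   curl "http://www.example.com/deploy.php?key=SECRET"
        // Placeholder secret and path; the web server user needs write
        // access to the working copy for this to succeed.
        if (!isset($_GET['key']) || $_GET['key'] !== 'SECRET') {
            header('HTTP/1.1 403 Forbidden');
            exit('Forbidden');
        }

        // Pull the latest committed revision into the live working copy.
        $output = shell_exec('svn update /var/www/site 2>&1');
        header('Content-Type: text/plain');
        echo $output;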

    Read the article

  • Block a specific IP block from my website in PHP

    - by iTayb
    I'd like, for example, to block every IP from the base 89.95 (89.95.*.*). I don't have .htaccess files on my server, so I'll have to do it with PHP.

        if ($_SERVER['REMOTE_ADDR'] == "89.95.25.37")
            die();

    would block a specific IP. How can I block entire IP blocks? Thank you very much.
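
    A minimal sketch of one way to do the range check in PHP, assuming the goal is to reject everything in 89.95.0.0/16 before the rest of the script runs:

        <?php
        // Reject any visitor whose address falls inside a blocked prefix.
        // The prefix is the one from the question; add more as needed.
        $blocked_prefixes = array('89.95.');

        $remote = $_SERVER['REMOTE_ADDR'];
        foreach ($blocked_prefixes as $prefix) {
            if (strpos($remote, $prefix) === 0) {   // address starts with the prefix
                header('HTTP/1.1 403 Forbidden');
                die('Access denied.');
            }
        }

        // ...the rest of the page runs only for non-blocked addresses.

    For arbitrary CIDR ranges rather than dotted prefixes, ip2long() plus a bitmask comparison is the usual next step.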

    Read the article

  • Make a website page get news articles from feeds

    - by Andy
    Hi, I would like to start publishing some news articles from time to time. One option would be to create a new web page for every article, but it sounds like this can be done more easily. For instance, I noticed Clear Channel uses some kind of feed, like: http://www.z100.com/cc-common/news/sections/newsarticle.html?feed=104650&article=7011404 I don't know what it means or how it works, but it looks like a nice way to get the article from some kind of database, I guess? Does someone know how this works, or some other way to create new articles, such as dynamic functionality to generate links to the articles and fetch the right article from a database? Thanks!
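
    The URL in the example is exactly that pattern: one template page receives a feed ID and an article ID in the query string and looks the story up, so no new page is created per article. A minimal sketch of the same idea in PHP, reading items out of an RSS feed (the feed URL is a placeholder; a database query would work the same way):

        <?php
        // newsarticle.php            -- lists all articles as links
        // newsarticle.php?article=3  -- shows article number 3
        // Placeholder feed URL; point it at your own RSS source instead.
        $feed_url = 'http://example.com/news/rss.xml';

        $rss = simplexml_load_file($feed_url);
        if ($rss === false) {
            die('Feed unavailable.');
        }
        $items = $rss->channel->item;

        if (isset($_GET['article'])) {
            // Show a single article, picked by its position in the feed.
            $n = (int) $_GET['article'];
            if (isset($items[$n])) {
                echo '<h1>' . htmlspecialchars((string) $items[$n]->title) . '</h1>';
                echo '<p>' . htmlspecialchars((string) $items[$n]->description) . '</p>';
            } else {
                echo 'Article not found.';
            }
        } else {
            // List every article as a link back to this same template page.
            $i = 0;
            foreach ($items as $item) {
                echo '<a href="newsarticle.php?article=' . $i . '">'
                   . htmlspecialchars((string) $item->title) . "</a><br />\n";
                $i++;
            }
        }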

    Read the article

  • What movie website allows people to scrape it?

    - by Sergio Tapia
    I've wanted to make a C# library to scrape movie information and return it to the application, but someone told me that it's against the TOS. RottenTomatoes seems to have no problem with it from what I've read on their licensing page, but I'm not quite sure. Where could I acquire movie information legally and without cost? It's for an open-source application hosted here: LINK

    Read the article
