Search Results

Search found 31417 results on 1257 pages for 'site structure'.

  • Basic Information for Lead Generation

    Online lead generation has a very transparent cost structure. It is straightforward to see each lead's origin and quality, and companies can then pay only for data on interested consumers who meet their criteria. This makes the service highly cost-effective and gives each lead greater value.

    Read the article

  • Shutdown/logoff script in Ubuntu 13.10

    - by TNT
    What would be the best way to run a script upon GUI logoff, shutdown, hibernate, and sleep? In 12.04 I think I did this in /etc/lightdm/lightdm.conf, but in 13.10 the folder structure changed, and when I create this script the display manager won't even start on boot. I am looking to run a simple automatic TrueCrypt unmount command (truecrypt -d), but of course the question applies to any script.
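
    In 13.10 the usual pointer is LightDM's own hook scripts. A sketch, assuming LightDM is still the display manager and with an illustrative script path:

        # /etc/lightdm/lightdm.conf
        [SeatDefaults]
        session-cleanup-script=/usr/local/bin/on-logout.sh

        # /usr/local/bin/on-logout.sh  (mark executable with chmod +x)
        #!/bin/sh
        # Unmount all mounted TrueCrypt volumes when the GUI session ends.
        /usr/bin/truecrypt -d

    Logoff is only one of the four cases: on releases that use pm-utils, a script dropped into /etc/pm/sleep.d covers suspend and hibernate, while shutdown needs its own init hook.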

    Read the article

  • Why is it better to use unreadable bytes for client-server communication?

    - by Alessa
    I'm designing a client-server protocol, and this is what I'm wondering about. I could use readable commands such as: "authme username password" (maybe encrypted), "accept", "get archive of H2O from 03.02.2005 to 20.12.2064", and then transfer a binary structure or an "error description". Why do I instead always need to do something like 0x0FA52FD + CRC, 0x0D34423 + CRC, ...? I can see some security reasons, but I don't think that's the real reason. So why can't I use strings in client-server communication?
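
    A contrast makes the tradeoff concrete. Below is a minimal sketch (the opcode and field layout are invented, not from any real protocol) of the same login request as a text line and as a length-prefixed binary frame with a CRC-32 trailer; the binary form hands the receiver a fixed structure it can validate before acting on anything:

        using System;
        using System.IO;
        using System.Text;

        class FrameDemo
        {
            // Bitwise CRC-32 (IEEE polynomial), inlined so the sketch is self-contained.
            static uint Crc32(byte[] data)
            {
                uint crc = 0xFFFFFFFFu;
                foreach (byte b in data)
                {
                    crc ^= b;
                    for (int i = 0; i < 8; i++)
                        crc = (crc >> 1) ^ ((crc & 1u) != 0u ? 0xEDB88320u : 0u);
                }
                return ~crc;
            }

            static void Main()
            {
                // Text form: easy to read in a packet capture, but the server must
                // parse it, handle escaping, and trust the line arrived complete.
                string textCommand = "authme username password\n";

                // Binary form: opcode + payload length + payload + CRC-32.
                byte[] payload = Encoding.UTF8.GetBytes("username\0password");
                using (var ms = new MemoryStream())
                using (var writer = new BinaryWriter(ms))
                {
                    writer.Write((ushort)0x0001);         // opcode: AUTH (invented)
                    writer.Write((ushort)payload.Length); // tells the receiver how much to read
                    writer.Write(payload);
                    writer.Write(Crc32(payload));         // lets the receiver drop corrupt frames
                    Console.WriteLine("text: {0} bytes, frame: {1} bytes",
                        textCommand.Length, ms.Length);
                }
            }
        }

    Text protocols are perfectly legitimate (HTTP, SMTP, and IMAP are all text). Binary framing buys compactness, unambiguous message boundaries, and corruption detection, at the cost of needing a tool to read the wire.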

    Read the article

  • Restricting permissions to individual documents on SharePoint

    - by wahle509
    Here's what I'm trying to do: I would like to create a list of documents on a site in my company's SharePoint installation. Each document should have specific user permissions for viewing and editing. For example: the list is for performance reports. John has his out there, called "John_PR_09.docx". Only he and his supervisor should have permission to view, edit, or do anything to it. Another employee has hers out there with permissions for only her and her supervisor, and so on. I have tested this with a document that I removed the groups and users from (since they inherit permissions from the parent) and gave only my own user account permissions to. I then asked someone else to try to open it, and she could; she even wrote "TEST" on the document and saved it. What am I doing wrong? I thought I had stopped it from inheriting permissions from its parent and given only myself rights to edit it.
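
    For comparison, this is what that setup looks like in the SharePoint server object model (the list name, item ID, and account are placeholders); it performs the same two steps as the UI: break inheritance, then grant explicitly:

        SPWeb web = SPContext.Current.Web;
        SPListItem item = web.Lists["Performance Reports"].Items.GetItemById(1);

        // Stop inheriting from the list; false discards the inherited
        // assignments instead of copying them onto the item.
        item.BreakRoleInheritance(false);

        // Grant Contribute to just the owner; repeat for the supervisor.
        SPUser owner = web.EnsureUser("DOMAIN\\john");
        SPRoleAssignment assignment = new SPRoleAssignment(owner);
        assignment.RoleDefinitionBindings.Add(
            web.RoleDefinitions.GetByType(SPRoleType.Contributor));
        item.RoleAssignments.Add(assignment);

    When a supposedly removed user can still edit, the usual culprits are a group that still holds an assignment on the item, a site collection administrator account, or a permission policy at the web-application level; item-level permissions override none of those.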

    Read the article

  • How to fix the “Live INT automatically logs out” problem

    - by ybbest
    Problem: The Live INT environment automatically logs out. I am trying to set up authentication with Windows Live ID and followed this blog post; I have a problem logging in to the Live INT web site. Whenever I try to log in (at https://login.live-int.com/login.srf, the internal Live environment to be used in a dev environment), after entering a valid email/password I get redirected to the logout page. I tried two different accounts (one with an existing email address, the other with a newly created @hotmail-int.com address) and three different browsers, so I'm sure that neither the account nor the browser is the cause of this. I also tried entering a wrong password, and in that case I get the message that the password is wrong. Solution: All you need is the unique ID in order to add the user to SharePoint, and you can get that ID without logging into the Live INT environment. I think the Live internal environment is not working correctly for some reason; the only reason I needed to log in to it was to get the unique ID for the test account so that I could add the user to SharePoint. All the blogs I have come across require you to log in to get the unique ID. However, I figured out another way of getting it without logging in. The steps are: Register a new test account in the Live internal environment. Go to the SharePoint site collection that has Live ID authentication enabled and select LiveID INT (the name may differ, since you could have named it differently when you set up the authentication provider) from the dropdown. Try logging in with the internal Live account; you will get an Access Denied error showing the unique ID for the test account. Add that account to your SharePoint group, and boom, it works. I hope this helps anyone who needs to do this in the future.

    Read the article

  • Lost Windows 7 files

    - by Pader
    My intention was to have a dual-boot system with Ubuntu and Windows 7. Obviously I did something wrong, because although I had a boot menu (is it normal for it to look DOS-like?) which gave me the option of booting into Windows 7, I was unable to do so. Also, when I booted into Ubuntu, my Windows 7 drive was not available. The Windows 7 drive was an internal 1TB drive partitioned into a 200GB partition (OS) and a second partition making up the remainder. I was still unable to access this Windows 7 drive even after deleting Ubuntu, as I kept getting a 'requires an NTFS drive' error, or something similar. I could not even re-install Windows 7, as the disk was not recognised. I did eventually get the drive back, but I cannot for the life of me remember how. I did try to recover my lost W7 data using Ontrack EasyRecovery (which has always been successful in the past for post-format recovery), but it would not recognise the 1TB drive even though it was now formatted as NTFS. From other posts on this site, I gather that Linux users consider this a 'Windows 7 site' problem. However, I would dearly love to recover some of my lost Windows 7 files. I had resigned myself to a lot of lost personal data, but I happened to notice that a 2TB drive I had connected through a USB docking station had been repartitioned. It must have happened when I installed Ubuntu, as I can think of no other explanation. I certainly do not remember consciously asking Ubuntu to do this. The additional two partitions on the 2TB drive, the original Windows
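
    For anyone hitting the same wall, a cautious first pass from an Ubuntu live session looks something like this (device names are examples; stay read-only until the data is copied off):

        sudo fdisk -l                                   # identify the NTFS partitions
        sudo mkdir -p /mnt/win
        sudo ntfsfix /dev/sda2                          # clear the NTFS dirty flag if mounting fails
        sudo mount -t ntfs-3g -o ro /dev/sda2 /mnt/win  # mount read-only and copy data off
        sudo apt-get install testdisk
        sudo testdisk /dev/sdb                          # scan the repartitioned 2TB drive for lost partitions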

    Read the article

  • Is a 302 redirect to a random URL from the homepage an SEO problem?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe this is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?
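
    One pattern worth sketching (not from the question; generate_note_id() is a hypothetical stand-in for whatever produces the random URL): branch on the crawler's user agent and answer with a 200 landing page instead of the redirect. Serving crawlers different content than users is something search engines frown upon, so the safer long-term design is a real landing page for everyone, with note creation happening on a click:

        <?php
        // Hypothetical sketch: give crawlers a 200 with indexable content
        // instead of the 302 to a freshly generated note URL.
        $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
        if (preg_match('/googlebot|bingbot/i', $ua)) {
            http_response_code(200);
            readfile(__DIR__ . '/snapshot.html');  // static snapshot of the homepage
            exit;
        }
        header('Location: /' . generate_note_id(), true, 302);  // generate_note_id() is hypothetical
        exit;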

    Read the article

  • Apache rewrite rule to remove index.php and direct certain areas to https

    - by Stephen Martin
    I have a CodeIgniter application running on Apache 2. I have managed to remove index.php from the URLs with this .htaccess:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]

    Now I want to make certain parts of the site redirect to https, so I tried this:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]
        RewriteRule ^/?cpanel/(.*) https://%{SERVER_NAME}/cpanel/$1 [R,L]
        RewriteRule ^/?login/(.*) https://%{SERVER_NAME}/cpanel/$1 [R,L]

    But it doesn't work. I have to say that when it comes to Apache rewrites I'm a noob, and I can't find any tutorials on how to remove index.php and also rewrite/redirect certain parts of the site to https. Any ideas? Thanks.
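
    A sketch of the usual fix, assuming the rules live in a per-directory .htaccess (where the matched path has no leading slash): the https redirects have to come before the index.php catch-all, because the catch-all's [L] flag stops processing before they are ever reached, and they should be guarded so they fire only on plain-http requests. Note that the login rule in the attempt above also sends visitors to /cpanel/, which looks like a copy-paste slip:

        RewriteEngine on

        # Send /cpanel and /login to https first
        RewriteCond %{HTTPS} !=on
        RewriteRule ^(cpanel|login)(/.*)?$ https://%{HTTP_HOST}/$1$2 [R=301,L]

        # Then route everything else through CodeIgniter's front controller
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule .* index.php/$0 [PT,L]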

    Read the article

  • Apache only logs PHP errors if LogLevel is set to debug

    - by Sudowned
    I'm developing a CodeIgniter application, and for reasons that I do not fully understand, errors have stopped being logged to the file specified in the Apache site conf. The page I'm testing is definitely generating a 500 error, but that is not reflected in the logs unless I set LogLevel debug; setting LogLevel to error or warn results in no errors being logged. I don't think this is a CI issue, because I've been developing this site for close to a week now and errors were logged as expected until I picked the project up again this morning. For what it's worth, I've got error_reporting(E_ALL); set in my index.php.
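
    One avenue worth checking (a sketch, with an assumed log path): a 500 caused by a PHP fatal error is logged by PHP, not by Apache's core, so pointing PHP at its own log file takes Apache's LogLevel out of the equation. Under mod_php, these lines can go in the vhost or .htaccess:

        php_flag  log_errors on
        php_value error_log /var/log/apache2/php_errors.log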

    Read the article

  • Do I need "cube subclasses" to represent the blocks in a Minecraft-like world?

    - by stighy
    I would like to try to develop a very simple game like Minecraft for my own education. My main problem at the moment is figuring out how to model the classes that represent the world, which will be made of blocks of various types (such as dirt, stone, and sand). I am thinking of creating the following class structure: a Cube class (with properties like color, strength, flammable, gravity) with subclasses Dirt, Stone, Sand, et cetera. My question is: do I need the Cube subclasses, or is a single Cube class sufficient?
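
    A sketch of the data-driven alternative most answers converge on (the names and property choices are invented to match the question): keep a single Cube, and move the per-material differences into shared CubeType records (the flyweight pattern), so adding a new block type means adding data rather than a class:

        // One shared, immutable record per material instead of one subclass per material.
        public sealed class CubeType
        {
            public readonly string Name;
            public readonly string Color;
            public readonly int Strength;
            public readonly bool Flammable;
            public readonly bool AffectedByGravity;

            public CubeType(string name, string color, int strength, bool flammable, bool gravity)
            {
                Name = name; Color = color; Strength = strength;
                Flammable = flammable; AffectedByGravity = gravity;
            }
        }

        public static class CubeTypes
        {
            public static readonly CubeType Dirt  = new CubeType("Dirt",  "Brown",  1, false, false);
            public static readonly CubeType Stone = new CubeType("Stone", "Gray",   5, false, false);
            public static readonly CubeType Sand  = new CubeType("Sand",  "Yellow", 1, false, true);
        }

        // The world stores one CubeType reference (or just a byte ID) per cell,
        // so millions of cells share the same handful of CubeType objects.
        public struct Cube
        {
            public CubeType Type;
        }

    Subclasses earn their keep only when block types need genuinely different behavior (code), not merely different values (data); at Minecraft scale, one object per block with virtual methods is also far heavier in memory than an ID into a type table.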

    Read the article

  • Is browser and bot whitelisting a practical approach?

    - by Sn3akyP3t3
    With blacklisting, it takes plenty of time to monitor events to uncover undesirable behavior and then take corrective action, and I would like to avoid that daily drudgery if possible. I'm thinking whitelisting would be the answer, but I'm unsure whether that is a wise approach, given its deny-all, allow-only-a-few nature. My fear is that eventually someone out there will be blocked unintentionally. Even so, whitelisting would also block plenty of undesired traffic to pay-per-use items such as the Google Custom Search API, as well as preserve bandwidth and my sanity. I'm not running Apache, but the idea would be the same, I'm assuming. I would essentially be depending on the User-Agent identifier to determine who is allowed to visit. I've tried to account for accessibility, because some web browsers are more geared toward those with disabilities, although I'm not aware of any specific ones at the moment. I fully understand the need not to depend on whitelisting alone to keep the site away from harm; other means to protect the site still need to be in place. I intend to have a honeypot, a checkbox CAPTCHA, use of OWASP ESAPI, and blacklisting of previously known bad IP addresses.

    Read the article

  • Acronis restore of a Wubi Ubuntu 12.04 partition ends with an error

    - by user287082
    I'm on Windows 8.1. I downloaded ubuntu-12.04.4-desktop-amd64.iso, mounted the ISO, and copied wubi.exe to the same folder as the ISO. I ran wubi.exe and installed to another partition. Everything worked fine, and I then made a backup with Acronis True Image 2013. Today I used Acronis to restore that backup; after that, I boot into Ubuntu and see this error: http://i291.photobucket.com/albums/ll293/sniper_awm/2014-05-31_161817_zpsfe7a21c8.png. I can still see the folder structure of the Wubi partition from Windows 8.1, and I have copied root.disk to another place. How can I fix this? (Dell 2420)

    Read the article

  • Importing an existing project into Git

    - by Andy
    Background: During the course of developing our site (ASP.NET), we discovered that our existing source control (SourceGear Vault) wasn't working for us, so we decided to migrate to Git. The transition has been less than smooth, though. Our site is broken up into three environments: DEV, QA, and PROD. For the most part, DEV and the source control repo have been in sync with each other. There is one branch in the repo; if a page was going to be moved up to QA, the file was moved manually, and the same went for anything that was ready for PROD. So our current QA and PROD environments do not correspond to any particular commit in the master branch. Clarification: the QA and PROD branches are not currently, nor have they ever been, in source control. The question: How do I move QA and PROD into Git? Should I forget about the history we've maintained up to this point and start over with a new repo? I could start with everything on PROD, then make a branch and pull in everything from QA, and then make another branch off of that with DEV. That way, not only will the branches reflect the differences in the environments, they'll also be in the right order chronologically, with the newest commits in the DEV branch. What I've tried so far: I thought about creating a QA branch off of the current master and using robocopy to make the working folder look like the current QA environment. This doesn't work, because the new commit from QA will remove new files from DEV, and that will remove them when we merge up; I suspect there would be similar problems if I started QA at an earlier (though not exact) commit from DEV.
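
    The plan described in the question (start from PROD, layer QA on top, then DEV) maps to a short command sequence; a sketch, where the copy steps stand in for robocopy or a manual copy, and git add -A records deletions as well as additions:

        git init site-repo
        cd site-repo
        # copy the current PROD tree into the working folder, then:
        git add -A
        git commit -m "Baseline: current PROD"

        git checkout -b qa
        # overwrite the working folder with the current QA tree, then:
        git add -A
        git commit -m "QA changes on top of PROD"

        git checkout -b dev
        # overwrite the working folder with the current DEV tree, then:
        git add -A
        git commit -m "DEV changes on top of QA"

    This gives up the old history, but each branch becomes an exact snapshot of its environment, and merges flow in the natural direction (dev into qa, qa into master).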

    Read the article

  • Lightweight, low-cost enterprise backup solution

    - by Scott
    Looking for a backup solution primarily for Windows clients (XP/7) that will either back up to two different servers (one on site and one off site, over the internet; it can be our own server), or back up to one server that we would then somehow back up offsite/over the internet. By lightweight, I mean the backup client software should not eat up much memory or processor, since some of the client machines are older. I am used to using CrashPlan for home use: the pricing is nice for the amount of backup I get, and it works great and is easy to install and get going; I can back up to my own machines locally and over the net. However, the price is going to be a little steep for enterprise-level backup of 1500+ machines. Are Zmanda and Bacula good choices to consider? Are they lightweight? Can the clients/agents be set to go over the net and/or to multiple backup servers?

    Read the article

  • Do I need a VPN to secure communication over a T1 line?

    - by Seth
    I have a dedicated T1 line that runs between my office and my data center. Both ends have public IP addresses. On both ends, we have T1 routers which connect to SonicWall firewalls. The SonicWalls do a site-to-site VPN and handle the network translation, so the computers on the office network (10.0.100.x) can access the servers in the rack (10.0.103.x). So the question: can I just add a static route to the SonicWalls so each network can access the other without the VPN? Are there security problems (such as someone else adding the appropriate static route and being able to access either the office or the datacenter)? Is there another or better way to do it? The reason I'm looking at this is that the T1 is already a pretty small pipe, and the VPN overhead makes connectivity really slow.

    Read the article

  • Hierarchies in SQL

    One very common structure that needs to be handled in T-SQL is the hierarchy. One of our prominent members of the community discusses how you can handle hierarchies in SQL Server.
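
    As a taste of the technique the article covers, here is a minimal adjacency-list hierarchy walked with a recursive common table expression (the table and column names are illustrative):

        -- Each row points at its parent; the root has ManagerID = NULL.
        CREATE TABLE Employee (
            EmployeeID int PRIMARY KEY,
            ManagerID  int NULL REFERENCES Employee (EmployeeID),
            Name       nvarchar(100) NOT NULL
        );

        -- Walk the tree from the root down, tracking each row's depth.
        WITH OrgChart AS (
            SELECT EmployeeID, ManagerID, Name, 0 AS Depth
            FROM   Employee
            WHERE  ManagerID IS NULL
            UNION ALL
            SELECT e.EmployeeID, e.ManagerID, e.Name, oc.Depth + 1
            FROM   Employee AS e
                   JOIN OrgChart AS oc ON e.ManagerID = oc.EmployeeID
        )
        SELECT EmployeeID, ManagerID, Name, Depth
        FROM   OrgChart
        ORDER  BY Depth;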

    Read the article

  • SSL/https setup for herokuapp.com address rather than my actual domain

    - by new2ruby
    I have a subdomain of my site pointed at a Rails app at mysite.herokuapp.com. I bought a certificate from GoDaddy and seem to have that all set up correctly, so that when I go to http://mysite.herokuapp.com or http://dev.mysite.com, I'm redirected to https://mysite.herokuapp.com or https://dev.mysite.com. The problem is that when I visit dev.mysite.com, I get the error: "Safari can't verify the identity of the website." But when I go to mysite.herokuapp.com, I don't get the error. I wanted this to work the other way around, so that dev.mysite.com did not cause the error. I'm not sure where I went wrong; I used dev.mysite.com when generating the key and when setting it up at godaddy.com. Any ideas where I should look? P.S. The old site is hosted at DreamHost, and the DNS info is stored there as well, so I created a subdomain there as a CNAME record that points to mysite.herokuapp.com.
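
    A quick way to see what is happening (using the question's placeholder hostname): ask the server for the certificate it presents for the dev name, and inspect the subject:

        openssl s_client -connect dev.mysite.com:443 -servername dev.mysite.com </dev/null 2>/dev/null \
          | openssl x509 -noout -subject

    If the subject (or its Subject Alternative Name list) says mysite.herokuapp.com rather than dev.mysite.com, that explains the warning: a CNAME only redirects DNS resolution, while the browser still expects the certificate to match the name in the address bar, so the dev.mysite.com certificate has to be installed on whatever endpoint actually terminates SSL for that hostname.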

    Read the article

  • Server problem (duh)

    - by j-t-s
    Sorry the title couldn't be more specific. I installed Abyss Web Server. I'm running Windows XP Home Edition and I have wireless mobile broadband internet. I (and other people on other networks) used to be able to access my site by entering my IP address in the browser, but after I formatted and installed Abyss Web Server again, this no longer works. There are no errors. I CAN visit my own site by entering my IP address, BUT nobody else can; it just says "connecting" in the browser's status bar and never changes. I have consulted the docs and found no help, and Google hasn't helped with this problem either. Can somebody please help? Thank you :)

    Read the article

  • Exim: Change sender address when sending mail outside the local network

    - by Esa Varemo
    We have a working Exim setup at a site where users can send and receive mail. We are trying to set up a server to send some warnings and errors by email to an address outside the local network. The problem: the program that sends the mails sends them using the username it runs under and the local hostname of the server, which causes the mails to have a sender of the form [email protected]. Exim sends these mails to the ISP's SMTP server, which rejects them as having an illegal or unverifiable sender (the internal address). I'm thinking I should configure Exim to rewrite the sender when the sender's domain is on the local network and the receiver's domain is outside it. I tried setting up some kind of rewriting in the Exim config but did not manage to get it to work. I'd show what I tried, but I ran out of time on the last visit to the site and had to revert to the original version, losing all the changes.
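
    For reference, Exim's main configuration has a rewrite section intended for exactly this. A sketch with illustrative addresses (the flags mean: F rewrites the envelope sender, while f, r, and s rewrite the From:, Reply-To:, and Sender: headers):

        begin rewrite
        *@server1.lan    [email protected]    Ffrs

    Rewriting can be tested without sending anything: exim -brw [email protected] prints how each address would be rewritten.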

    Read the article

  • Dreamweaver files uploaded to a Windows 2008 server cause a login prompt

    - by Lil
    I have a customer who uses a four-year-old version of Dreamweaver to edit her web pages. My hosting reseller account is with a company that uses Windows Server 2008. Every time my customer edits a page and uploads it, I have to manually set the permissions for that file to be readable, from the site's control panel. The customer is furious with me because her files cause the login prompt. I am able to upload files myself that remain readable on the site, with both FileZilla and FrontPage. I am assuming that her Dreamweaver settings are the cause of the problem, but I don't have that program myself and don't know what to advise her. Any suggestions?

    Read the article

  • Generate a Word document from list data

    - by PeterBrunone
    This came up on a discussion list lately, so I threw together some code to meet the need. In short, a colleague needed to take the results of an InfoPath form survey and give them to the user in Word format. The form data was already in a list item, so it was a simple matter of using the SharePoint API to get the list item, formatting the data appropriately, and using response headers to make the client machine treat the response as MS Word content. The following rudimentary code can be run in an ASPX page (or an assembly) in the 12 hive. When you link to the page, send the list name and item ID in the query string and use them to grab the appropriate data.

        // Requires the Microsoft.SharePoint, System.Text, and System.Web namespaces.
        // Clear the current response headers and set them up to look like a Word doc.
        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.Charset = "";
        HttpContext.Current.Response.ContentType = "application/msword";
        string strFileName = "ThatWordFileYouWanted" + ".doc";
        HttpContext.Current.Response.AddHeader("Content-Disposition", "inline;filename=" + strFileName);

        // Using the current site, get the list by name and then the item by ID (both from the URL).
        string myListName = HttpContext.Current.Request.QueryString["listName"];
        int myID = Convert.ToInt32(HttpContext.Current.Request.QueryString["itemID"]);
        SPSite oSite = SPContext.Current.Site;
        SPWeb oWeb = oSite.OpenWeb();
        SPList oList = oWeb.Lists[myListName];
        SPListItem oListItem = oList.Items.GetItemById(myID);

        // Build a string with the data -- format it with HTML if you like.
        StringBuilder strHTMLContent = new StringBuilder();
        // *
        // Here's where you pull individual fields out of the list item.
        // *

        // Once everything is ready, spit it out to the client machine.
        HttpContext.Current.Response.Write(strHTMLContent);
        HttpContext.Current.Response.Flush();
        HttpContext.Current.Response.End();

    Read the article

  • Recommendation for a non-standard SSL port

    - by onurs
    Hey guys, on our server I have a single IP and need to host two different SSL sites. The sites have different owners, so they have different SSL certificates and can't share the same certificate with a SAN. As a last resort, I modified the web application to allow a specified port for secure pages; because it looked simple, I used port 200. However, I'm worried that some visitors may be unable to see the site because their firewalls or proxies block that port for SSL connections. I heard some people were unable to see the website (a home user and someone from an enterprise company); I don't know if this was the reason. So, any recommendations for a non-standard SSL port number (443 is used by the other site) that may work better for visitors than port 200? Like 8080 or 8443, perhaps? Thanks!

    Read the article

  • Is it possible to upload only files that have been updated to a server?

    - by kamikaze_pilot
    Hi guys, suppose I have a server accessible via FTP, and it hosts websites. Suppose I want to edit a website locally so the edits won't affect the live site, and suppose I edit a whole bunch of files and don't want to deal with the hassle of keeping track of which files I've edited. Once I've finished editing, I want to upload the changes to the server via FTP. Is there some FTP software that automatically detects which files have been edited and uploads and overwrites only those files, rather than having me manually choose the files I've edited (and hence keep track of them) or upload the entire site, which is a waste of time? Thanks in advance.
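
    Yes; one long-standing answer is lftp's reverse mirror (the host, credentials, and paths below are placeholders), which compares timestamps and sizes and uploads only what changed:

        lftp -u user,password ftp.example.com -e "mirror --reverse --only-newer /local/site /remote/www; quit"

    GUI clients such as WinSCP offer the same comparison as a "synchronize" command, and many IDE deployment tools keep their own record of locally modified files.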

    Read the article
