Search Results

Search found 30217 results on 1209 pages for 'website performance'.

Page 183/1209 | < Previous Page | 179 180 181 182 183 184 185 186 187 188 189 190  | Next Page >

  • continuous music on website

    - by Patrick
    Hi all, I'm against it, as I'm sure all of you are, but my client wants background music on their website. I'm very new to this, so I was wondering how I should do that. I know I should use iframes, but what's the actual way of using them? E.g., do I just create the home page with two frames (one for the music, one for the rest of the website), so that every time the user clicks a link only the content frame loads the destination page - or should I update all pages in some way to make sure they are 'frames enabled'? Also, how do I style the frame to make sure it's hidden? Thanks, Patrick. PS: please don't reopen the discussion about why background music is bad - I do know that and personally hate it. But the client is adamant and paying for it, so... ;)
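
    A minimal sketch of the layout described above - the question mentions iframes, but a frameset with a zero-height row is the variant sketched here, since it keeps the music frame alive across navigation. File names and the audio element are illustrative assumptions:

      <!-- index.html: one hidden frame keeps the music playing while the visible frame navigates -->
      <frameset rows="*,0" frameborder="0">
        <frame src="content.html" name="content">          <!-- all normal pages load here -->
        <frame src="music.html" noresize scrolling="no">   <!-- zero-height row keeps this frame out of sight -->
      </frameset>

      <!-- music.html: plays the background track; <audio> assumes an HTML5-capable browser,
           an <embed> or Flash player was the more common choice at the time -->
      <audio src="background.mp3" autoplay loop></audio>

    Links inside content.html would need target="content" (or a <base target="content"> in the head) so navigation stays in the visible frame and the music frame is never reloaded.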

    Read the article

  • Win 2008 R2 - copying TO disk is very slow, copying FROM is more or less okay

    - by avs099
    I have Windows 2008 R2 SP1 with 4 identical SATA disks (Seagate Barracuda 7200) in a RAID 5 array. It has 4 GB of memory; all recent updates are installed. Problem: when I copy a large file from one folder to another, I get about 10 MB/s average speed. When I read this file from a network share via a 1 Gbps connection, I get about 25-30 MB/s. Both numbers seem low to me, but I'm especially frustrated by the low write speed. There is no antivirus and no Hyper-V; it's just a file server, and when I run my tests nobody else reads from or writes to it (we have only 4 people in the team, so I'm sure). Not sure if it matters, but there is only one logical disk, "C", with all available space (1400 GB). I'm not an admin at all, so I have no idea where to look and what other information to provide. I did run Performance Monitor with "% idle time", "avg. bytes read" and "avg. bytes write" - here is the screenshot: I'm not sure why there are such obvious spikes. Any idea? Please let me know if you need me to provide more information - what counters should I check, etc. I'm very eager to get this solved. Thank you. UPDATE: we have another Windows 2008 R2 SP1 server with 2 RAID 1 arrays - one is disk C (where Windows is installed), the other is disk E. It is running Hyper-V and does not have antivirus. I noticed the following behavior when I copy a large file (a few GBs):

      C - C: about 50 MB/s
      C - E: about 55 MB/s
      E - E: 8 MB/s!!!
      E - C: 8 MB/s!!!

    What could cause this? The E drive is a RAID 1 array of the same Seagate Barracuda 1TB drives.
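
    A hedged way to capture the write-side counters mentioned above from the command line (the counter names are the standard PhysicalDisk set; the interval, count and output file are arbitrary):

      rem Sample disk idle time, write latency and queue depth once per second for 60 seconds
      typeperf "\PhysicalDisk(_Total)\% Idle Time" ^
               "\PhysicalDisk(_Total)\Avg. Disk sec/Write" ^
               "\PhysicalDisk(_Total)\Current Disk Queue Length" ^
               -si 1 -sc 60 -o disk_counters.csv

    Sustained "Avg. Disk sec/Write" well above ~25 ms during the copy would point at the array's write path; RAID 5 parity writes with the controller's write cache disabled are a common culprit for symptoms like these.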

    Read the article

  • Python Framework for small website

    - by mvid
    I am planning a small, simple website to showcase myself as an engineer. My preferred language is Python and I hope to use it to create my website. My pages will be mostly static, with some database-stored posts/links. The site will be simple, but I would like to have freedom in how it operates. I plan on using CSS/JS for the design, so I really just need an easy way to throw a small amount of content around. Some frameworks I have come across: Flask, CherryPy, Pinax. Are there any suggestions? Does anyone have experience with Python on small/hobby websites?
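
    For scale, a minimal sketch of what a mostly static site looks like in Flask (one of the frameworks listed); the route and returned markup are illustrative assumptions:

      # app.py - minimal Flask sketch; run with `python app.py` after `pip install flask`
      from flask import Flask

      app = Flask(__name__)

      @app.route("/")
      def index():
          # a real site would render a template here; a string keeps the sketch self-contained
          return "<h1>Portfolio</h1>"

      if __name__ == "__main__":
          app.run(debug=True)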

    Read the article

  • Does AMD Cool n Quiet Slow Down Your System?

    - by Software Monkey
    I discovered today that having AMD Cool'n'Quiet enabled in my BIOS appears to be slowing down my Windows XP SP2 system by about 29% on memory- and CPU-intensive workloads. I was wondering if (a) anyone else has encountered this, (b) anyone can offer an explanation, and (c) there are any negatives I need to be aware of if I keep AMD CnQ disabled. With some superficial testing so far, I don't notice any difference with CnQ off (other than the performance being what I expected from this new hardware). It seems to ramp up the CPU fan a little as my program maxes out one core, but that's the same as with CnQ on. And when I let the system idle, the CPU fan slows down and the system is as quiet as a mouse (after years of 6 small fans churning like they want to go into orbit, it's nice to again have a system where I can hear the HDDs seeking). Bonus question: does CnQ cause issues with system stability? I ask because the reason I disabled it was that I have had a few freezes and one spontaneous reboot with my new hardware.

    Read the article

  • Best Static Website Generator

    - by Nick Retallack
    In the age of dynamic websites built with layouts and templates, nobody wants to write plain old repetitive static HTML anymore. But now that you can outsource dynamic features to services like Disqus, and you could get slashdotted/dugg/reddited at any moment, sometimes a static website is best for scalability. There are quite a few static website generators out there that let you use templates, layouts, alternative markup languages, and other new age stuff. So this question is a bit of a survey. Which do you think is the best, and why? Here are a few examples to start us off: WebGen, StaticMatic, Static.

    Read the article

  • Setting the Hostname in IIS Bindings Breaks Website

    - by Josh
    I just got a Windows Server 2008 VPS and I'm having trouble getting IIS 7 set up. I created a new website in IIS with the path, IP address, and hostname (like 'www.nameofsite.com') and clicked OK. When I browse to the site it pulls up "http://www.nameofsite.com" in the browser and... nothing... IE cannot display this webpage. If I blank out the hostname in the bindings and click [Browse], it works fine (it takes me to http://10.10.2.92 - the computer's IP). So entering the hostname breaks the website. Any ideas on what I'm missing? Services I might not have running, or roles I'm missing? No server roles were initially installed on the VPS, so I installed IIS, DHCP, DNS, and Application Server... overkill, but I wasn't sure what to install.
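
    One quick check (an assumption about the cause, not a confirmed fix): a host-header binding only matches if the browser actually requests that name, and the name must resolve to the server's IP. Until public DNS for the domain points at the VPS, the name can be mapped locally for testing by adding a line to the client's hosts file:

      # C:\Windows\System32\drivers\etc\hosts on the machine you browse from
      # (www.nameofsite.com and 10.10.2.92 are the example values from the question)
      10.10.2.92    www.nameofsite.com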

    Read the article

  • Basics for implementing SSL on PHP Website

    - by KoolKabin
    Hi guys, I am here as the developer of a website. My website has several modules, one of which processes credit cards. In order to process credit cards I need to implement an SSL layer and serve those pages over it. For the rest of the modules SSL is optional. Now my questions are: 1) Is the file location for HTTP and HTTPS the same? 2) Can sessions be shared between HTTP and HTTPS? This is required, as I need the user's login information and cart contents.
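
    A sketch of the usual setup, stated as assumptions rather than a definitive answer: on most hosts both schemes are served from the same document root, and a PHP session is keyed by its cookie, which the browser sends over both HTTP and HTTPS as long as the cookie is not flagged secure-only. Forcing the card-processing module onto HTTPS could then look like this (file name is an example):

      <?php
      // checkout.php - redirect to HTTPS before any output if the request came in over plain HTTP
      if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
          header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
          exit;
      }
      session_start(); // same session cookie, so the login and cart data carry over from the HTTP pages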

    Read the article

  • virtual disk image - file or partition

    - by tylerl
    I'm looking at the differences between using a file versus a partition to store a virtual disk image for VM use. The common wisdom is that partition-backed images are faster than file-backed images because of lower overhead. That makes sense, but I've never seen any actual numbers. My own testing turns up a different result. When I benchmark a direct-to-partition virtual disk, then format that same partition with ext4, create a virtual disk image stored on that ext4 filesystem, and benchmark that, I see no speedup at all for the direct-to-partition virtual disk. On some systems the file-backed image is even faster (possibly due to host OS caching or something like that). This test was repeated many times on many systems, with fairly consistent results. So, perhaps throwing out the performance justification, is it still considered better to use a partition rather than a virtual disk image? Is there some other reason why direct partition access is better than image files? Or is there some reason to go the other way around? Perhaps an advantage in one of the virtual disk file formats that you don't get with raw partition images?
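
    For comparison's sake, a sketch of one way to benchmark the two cases from the host with fio (device and file names are placeholders; the raw-partition run destroys any data on that partition):

      # raw partition (e.g. /dev/sdb1) - --direct=1 bypasses the host page cache
      fio --name=part-write --filename=/dev/sdb1 --rw=write --bs=1M --size=4G --direct=1

      # file-backed image on the ext4 filesystem created on that same partition
      fio --name=file-write --filename=/mnt/vmtest/disk.img --rw=write --bs=1M --size=4G --direct=1

    Running both with --direct=1 helps isolate the host-caching effect suspected above.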

    Read the article

  • What does "warning: unable to unlink website: Operation not permitted" mean when checking out a Git

    - by James A. Rosen
    I'm trying to create a local branch that tracks a remote branch. Here's what I get:

      > git checkout master
      > git push origin origin:refs/heads/myBranch
      Total 0 (delta 0), reused 0 (delta 0)
      To [email protected]:myrepo/myproject.git
       * [new branch] origin/HEAD -> myBranch
      > git fetch origin
      > git checkout --track -b myBranch origin/myBranch
      warning: unable to unlink website: Operation not permitted
      Branch myBranch set up to track remote branch myBranch from origin.
      Switched to a new branch 'myBranch'

    What does "warning: unable to unlink website: Operation not permitted" mean? Did everything work fine?
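
    The last two lines indicate the branch switch itself succeeded; the warning means git tried to delete (unlink) a file or directory named "website" in the work tree while updating it and the operating system refused. A few hedged checks, assuming a Unix-like shell:

      ls -ld website                 # who owns it, and what are its permissions?
      git status                     # is the work tree otherwise clean after the checkout?
      lsof +D website 2>/dev/null    # is some process (e.g. a running dev server) holding files under it open?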

    Read the article

  • Webfaction: How do I run a Static/Perl app and Django app under the same website

    - by swisstony
    I have an existing Perl app that I'm moving to a WebFaction website, and I will be adding Django apps to the same WebFaction website. I would like the Django app to get first call, so its URL path should be /. That would let me add whatever new URLs I wish to urls.py as my app grows. If a URL doesn't match anything in urls.py, I would like it to be passed on to the static Perl app. For example:

      /app1 - Django
      /app2 - Django

    Everything else not picked up by urls.py should go to my Perl app, for example:

      /index.html - Static/Perl app
      /about.html - Static/Perl app
      /contact.html - Static/Perl app
      /apps/perlapp1.cgi - Static/Perl app

    How do I go about achieving this on WebFaction?

    Read the article

  • Drupal - Lightbox -> iframe node displaying entire website with views

    - by kilrizzy
    I am attempting to make a view that lists thumbnails of my projects; clicking a thumbnail should enlarge the photo in a Lightbox and show some text and a link to the project's website. I am not sure there is a way to just add text to the lightbox using Views, so right now I am using the field "Lightbox2 iframe: thumb200wh-node page". That opens my entire website again inside the lightbox instead of just the node: http://jeffkilroy.com/portfolio_boxes Is there a way to display only the node from the Views module, or a way to use just an image but modify its description so that I can put text in it?

    Read the article

  • Jad file download link for my website

    - by Jareim
    Hi sir/madam! I am putting up a small website via webs.com for me and my friends that should also be accessible via WAP, i.e. mobile internet, and I want to add links to my site that download .jar files. I uploaded the files to my site, and the links to the .jar files work fine, but I also need links for .jad files (some mobile phones require the .jad file FIRST, then the .jar). I tried making a regular link to the .jad files, but it simply displayed the content of the .jad file; it wasn't installed or downloaded. What should I do? Or am I asking this on the wrong website? Thanks!
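
    A hedged guess at the cause: the phone shows the .jad as text because the server sends it with a plain-text MIME type, and J2ME descriptors need text/vnd.sun.j2me.app-descriptor. On hosts that honour .htaccess overrides (webs.com may not allow this), the mapping would look like:

      # .htaccess - MIME types for Java ME downloads
      AddType text/vnd.sun.j2me.app-descriptor .jad
      AddType application/java-archive .jar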

    Read the article

  • How to give your website customers a secure feeling

    - by Saif Bechan
    I was wondering how you can give your website customers confidence that you are not tinkering with the database values. I am planning on running a website which falls into the realm of an online game. There is a credit system involved that people have to pay for. Now I was wondering how sites like this assure their customers that there is no foul play in the database itself. As the database admin, I can pretty much change any value from within without anyone knowing I did, and hence let someone win who is not rightfully the winner. Is it maybe an option to encrypt and decrypt the credits people have so that I can't change them? Or is there a company I can hire that audits my company for foul play?
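
    Encryption alone would not stop whoever holds the key, but the idea raised above can be turned into tamper evidence: sign each balance with a key held by an outside auditor, so any value changed directly in the database no longer verifies. A rough sketch using the Python standard library; the key handling and record layout are assumptions, not a vetted design:

      import hashlib
      import hmac

      AUDITOR_KEY = b"secret held by the external auditor, not by the DBA"  # assumption

      def sign_balance(user_id: int, credits: int) -> str:
          """Return an HMAC the auditor can recompute later to verify the stored balance."""
          message = f"{user_id}:{credits}".encode()
          return hmac.new(AUDITOR_KEY, message, hashlib.sha256).hexdigest()

      def balance_is_untampered(user_id: int, credits: int, stored_sig: str) -> bool:
          # a mismatch means the credits column was changed without going through the signed path
          return hmac.compare_digest(sign_balance(user_id, credits), stored_sig)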

    Read the article

  • Performance-wise .htaccess

    - by purpler
    Here's my .htaccess template; I wonder if anything could be added to increase website performance.

      # Defaults
      AddDefaultCharset UTF-8
      DefaultLanguage en-US
      ServerSignature Off
      FileETag None
      Header unset ETag
      Options -MultiViews
      #Options All -Indexes

      # Force the latest IE version or ChromeFrame
      <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
          BrowserMatch MSIE ie
          Header set X-UA-Compatible "IE=Edge,chrome=1" env=ie
        </IfModule>
      </IfModule>

      # Proxy X-UA Setup
      <IfModule mod_headers.c>
        Header append Vary User-Agent
      </IfModule>

      # Rewrites
      Options +FollowSymlinks
      RewriteEngine On
      RewriteBase /

      # Redirect to non-WWW
      RewriteCond %{HTTPS} !=on
      RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
      RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

      # Redirect to WWW
      RewriteCond %{HTTP_HOST} ^domain.com
      RewriteRule (.*) http://www.domain.com/$1 [R=301,L]

      # Redirect index to root
      RewriteRule ^(.*)index\.(php|html)$ /$1 [R=301,L]

      # Caching
      ExpiresActive On
      ExpiresDefault A0
      Header set Cache-Control "public"

      # 1 Year Long Cache
      <FilesMatch "\.(flv|fla|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav|png|jpg|jpeg|gif|swf|js|css|ttf|eot|woff|svg|svgz)$">
        ExpiresDefault A31622400
      </FilesMatch>

      # Proxy Caching
      <FilesMatch "\.(css|js|png)$">
        ExpiresDefault A31622400
        Header set Cache-Control "private"
      </FilesMatch>

      # Protect against DOS attacks by limiting file upload size
      LimitRequestBody 10240000

      # Proper SVG serving
      AddType image/svg+xml svg svgz
      AddEncoding gzip svgz

      # GZip Compression
      <IfModule mod_deflate.c>
        <FilesMatch "\.(php|html|css|js|xml|txt|ttf|otf|eot|svg)$">
          SetOutputFilter DEFLATE
        </FilesMatch>
      </IfModule>

      # Error page
      ErrorDocument 404 /404.html

      # Deny access to sensitive files
      <FilesMatch "\.(htaccess|ini|log|psd)$">
        Order Allow,Deny
        Deny from all
      </FilesMatch>

    Read the article

  • Fast distributed filesystem for a large amounts of data with metadata in database

    - by undefined hero
    My project uses several processing machines and one storage machine. Currently storage is organized as an MSSQL FileTable shared folder. Every file in storage has some metadata in the database. Processing machines execute tasks for which they need files from storage and their metadata. After completing a task, a processing machine puts the resulting data back in storage. From there it is taken by another processing machine, which also generates some file and puts it back in storage, and so on. Everything was fine, but as the number of processing machines increased, I found myself bottlenecked by the storage machine's hard drive performance. So I want the processing machines to put files in a distributed FS, to lift load from the storage machine, so they can take data from each other and not only from the storage machine. Can you suggest a particular distributed FS which meets my needs? Or is there another way to solve this problem without one? The amount of data in the FS at any one time is several terabytes (storage can handle this, but the processing machines cannot). Data consistency is critical. The read/write policy is: once a file is written, it is constant and may only be removed, not modified. My current platform is Windows, but I'm ready to switch if there is a substantially more convenient solution on another one.

    Read the article

  • Deploying only changed part of a website with git to ftp (svn2web for git)

    - by Elazar Leibovich
    I have a website with many big image files. The source (as well as the images) is maintained with git. I wish to deploy it via FTP to a Bluehost-like cheap server. I do not wish to deploy the whole website each time (so that I won't have to upload too many unchanged files over and over), but to do roughly the following: in the git repository, mark the last deployed revision with a tag "deployed". When I say "deploy revision X", find out which files have changed between revision X and the revision tagged "deployed", and upload just those. It is similar in spirit to svn2web, but I want it for a DVCS; a Mercurial alternative will be considered. It's a pretty simple script to write, but I'd rather not reinvent the wheel if there's a similar script on the web. Capistrano and fab seem to know only how to push the whole revision in their SCM integration, so I don't think I can currently use them.
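
    For reference, a sketch of the workflow described above in plain git plus curl (the FTP URL, remote path and tag name are placeholders; deleted files and the very first upload would still need handling):

      # files changed between the last deployed revision and the one being deployed ("X")
      git diff --name-only deployed X | while read -r f; do
          curl --ftp-create-dirs -T "$f" "ftp://user:password@ftp.example.com/public_html/$f"
      done

      # move the marker forward once the upload succeeds
      git tag -f deployed X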

    Read the article

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on Server Fault. I was copying about 5 GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would have expected the server to be bored by the data flow, but as usual with this server, it really slowed down. At least I could still work in the remote session, even though there was some serious latency. The copy took its time (20 min?). During this time I went to a colleague and he tried to log in to the same server via Remote Desktop (for some other reason). It took about a minute to get to the login screen, a minute to open the Control Panel, a minute to open Performance Monitor... icons were loading maybe one per second. We saw the following (from memory):

      CPU: 2%
      Avg. Queue Length: 50
      Pages/sec: 115 (?)

    There was no other considerable activity on the server. The server only occasionally serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:

      Windows 2003
      Seagate ST3500631NS (7200 rpm, 500 GB)
      LSI MegaRAID based RAID 5, 4 disks, 1 hot spare
      Write Through
      No read-ahead
      Direct Cache Mode
      Harddisk-Cache-Mode: off

    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the remote desktop? How would you do that? Many thanks!

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish session sharing between the new servers by changing only three lines in the php.ini file (session.save_handler and session.save_path). I replaced:

      session.save_handler = files

    with:

      session.save_handler = memcache

    Then on the master webserver I set the session.save_path to point to localhost:

      session.save_path="tcp://localhost:11211"

    and on the slave webserver I set the session.save_path to point to the master:

      session.save_path="tcp://192.168.0.1:11211"

    Job done; I tested it and it works. But... obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load balanced to the slave webserver their session will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:

      Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
      Slave:  session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
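
    For what it's worth, my understanding of the pecl/memcache session handler (worth verifying against its documentation): a comma-separated save_path is treated as one pool, with each session key hashed to a single server in that pool rather than searched in order - so both machines should list the same servers, and putting localhost first would not make sessions "local first". The save_path entries also accept per-server options as URL parameters, e.g.:

      ; identical pool on both webservers; persistent connections, equal weight
      session.save_handler = memcache
      session.save_path = "tcp://192.168.0.1:11211?persistent=1&weight=1, tcp://192.168.0.2:11211?persistent=1&weight=1"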

    Read the article

  • Should I partition a 1TB Hard Disk whose primary use is media storage?

    - by Senthil
    I am going to get a 1 TB hard disk. I will be storing 1080p or 720p movies, high-bitrate music and pictures on it. I use my PC 90% of the time only to play, listen to or view those. I am running out of space on my current HD, so I am getting another one. My specs are a 2.7 GHz dual core, 512 MB GeForce 9400GT, 2 GB DDR2 RAM and all the proper Matroska codecs/players. I guess that is enough to play 1080p movies without a glitch, given an ideal hard disk. I've read about proper partitioning giving performance improvements, etc. I don't want my hard disk to be the bottleneck. Can someone tell me whether I should partition my 1 TB hard disk into many drives? If I should, what is the ideal size of each partition? Smooth playing of movies is very important to me. Once I start filling up the disk, there is no turning back, so I want to get it right before I start. Thanks.

    Read the article

  • How important is dual-gigabit lan for a super user's home NAS?

    - by Andrew
    Long story short: I'm building my own home server based on Ubuntu with 4 drives in RAID 10. Its primary purpose will be NAS and backup. Would I be making a terrible mistake by building a NAS Server with a single Gigabit NIC? Long story long: I know the absolute max I can get out of a single Gigabit port is 125MB/s, and I want this NAS to be able to handle up to 6 computers accessing files simultaneously, with up to two of them streaming video. With Ubuntu NIC-bonding and the performance of RAID 10, I can theoretically double my throughput and achieve 250MB/s (ok, not really, but it would be faster). The drives have an average read throughput of 83.87MB/s according to Tom's Hardware. The unit itself will be based on the Chenbro ES34069-BK-180 case. With my current hardware choices, it'll have this motherboard with a Core i3 CPU and 8GB of RAM. Overkill, I know, but this server will be doing other things as well (like transcoding video). Unfortunately, the only Mini-ITX boards I can find with dual-gigabit and 6 SATA ports are Intel Atom-based, and I need more processing power than an Atom has to offer. I would love to find a board with 6 SATA ports and two Gigabit LAN ports that supports a Core i3 CPU. So far, my search has come up empty. Thus, my dilemma. Should I hold out for such a board, go with an Atom-based solution, or stick with my current single-gigabit configuration? I know there are consumer NAS units with just one gigabit interface (probably most of them), but I think I will demand a lot more from my server than the average home user. Any advice is appreciated. Thanks.

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 x 2 TB disks. The array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create an XFS and an ext3 filesystem; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:

      writing the inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, and then the next 50 are written (according to the console display)
      when I try to cancel the operation with Ctrl+C, it hangs for half a minute before it is really cancelled

    The performance of the disks individually is very good; I've run bonnie++ on each one separately with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel the values are only reduced by about 10 MB/s. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details:

      Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
      mdadm - v2.6.7.1

    Does anyone have a suggestion on how to debug this further?
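
    One thing worth trying (an assumption about the cause, not a confirmed fix): telling mkfs the RAID geometry so its metadata writes align with the stripes instead of triggering read-modify-write cycles. With a 64 KiB chunk, 4 KiB blocks and 3 data disks (4-disk RAID 5), the numbers work out as below; the md device name is assumed:

      # 64 KiB chunk / 4 KiB block = stride 16; 3 data disks -> stripe-width 16 * 3 = 48
      mkfs.ext3 -b 4096 -E stride=16,stripe-width=48 /dev/md0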

    Read the article

  • Facebook Social Plugins Like button returns "Website Inaccessible"

    - by buggedcom
    We've just added the Facebook Like button to http://www.willyoung.co.uk/global/songs-and-lyrics/releases/the_hits?page=1; however, the JSON returned by Facebook is:

      for (;;);{"error":0,"errorSummary":"","errorDescription":"","errorIsWarning":false,"silentError":0,"payload":{"requires_login":false,"success":false,"already_connected":false,"is_admin":false,"show_error":true,"error_info":{"brief":"Website inaccessible","full":"The page at http:\/\/www.willyoung.co.uk\/global\/songs-and-lyrics\/releases\/the_hits could not be reached."}}}

    It basically says that the website is not accessible. We've implemented this on other sites and it's fine. I'm not really sure where this is coming from. Any ideas?

    Read the article

  • Windows Server 2012 Hyper-V very slow

    - by Matt Taylor
    I have been running several Hyper-V VMs on Windows Server 2008 R2 for the past couple of years and enjoying perfectly adequate performance for my testing/development/R&D environments. I'm a software developer, so my hardware knowledge is basic; however, I built the rig using:

      • Gigabyte GA-X58A-UD3R Intel X58 (Socket 1366) DDR3 motherboard
      • Intel Core i7 960 3.20GHz (Bloomfield) (Socket LGA1366)
      • 24GB triple channel RAM

    The host OS is running on an OCZ SSD and all the VMs are running on a 2TB Marvell SATA3 RAID 0 array consisting of 2 Western Digital Caviar Black 7,200rpm drives. I have tested the speed of the 2TB drive and appear to be getting less than 3Mbs, but it can adequately run a 4 VM farm including a DC, a (SQL) database and IIS application servers. I recently upgraded the SSD on which the host runs to a 256GB OCZ Vertex 4 and took the opportunity to upgrade to Windows Server 2012 and install the Hyper-V role. I tried importing one of my existing Windows Server 2008 R2 VMs (and converted it to .vhdx), plus I have tried creating a brand new Windows Server 2008 R2 VM, but both run extremely slowly and I can see nothing obvious using the host and guest Task Manager/Resource Monitor tools. In both cases the VM has 8GB RAM (fixed), 4 CPUs, a fixed-size (not expanding) HD, and is using an external virtual network running on a separate NIC to the host. I have upgraded the BIOS to the latest available version and checked the virtualization settings. I have run out of "obvious" (to a developer) things to check/configure and my next option will be to re-install the host OS, but before I do I would very much appreciate any advice from any experts out there. Thanks

    Read the article

  • Update website with a single command (git push) instead of FTP drag and dropping

    - by Wolfr
    Situation: I have a local copy of a website I have a server that I have SSH access to What do I want to do? Commit locally until I'm happy with my code Make branches locally Have one master branch that is the one that should be pushed to the server Update the website using a single command (git push origin master) If I set up a git repo locally using git init, and then push to a folder on the server, it doesn't work. When I FTP to the server to check the files, they're actually there. When I SSH into the server and do git status, it's not clean, even though it should be since I just pushed to the server. Steps I'm doing: Make a new folder on my computer (mkdir folder_x) Go into that folder (cd folder_x) Set up a new git repository there (git init) (git repository sets up successfully) Push the repository to the server using git push origin master (where origin is set up as user:[email protected])

    Read the article
