Search Results

Search found 111079 results on 4444 pages for 'user generated content'.


  • What's the best way of using MVC + ajax (jQuery) to load page content: aspx, ascx or both?

    - by devzero
    I want to have a menu that, when I click, replaces the content of a "main" div with content from an MVC view. This works just fine if I use a .aspx page, but any master page content is then doubled (like the <head> and any css/js). If I do the same but use a .ascx user control, the content is loaded without the extras, but if any browser loads the menu item directly (i.e. search bots or someone with JS disabled), the page is displayed without the master page content. The best solution I've found so far is to create the content as a .ascx page, then have a .aspx page load this if it's called directly from the menu link, while the ajax javascript would modify the link to use only the .ascx. This leads to a lot of duplication though, as every user control needs its own .aspx page. I was wondering if there is any better way of doing this? Could, for example, the master page hide everything that's not from the .aspx page if it was called with the parameter ?ajax=true?
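
    For what it's worth, a common pattern for this in ASP.NET MVC is a single action that returns the partial for AJAX calls and the full view otherwise - a minimal sketch, with the controller, action and view names made up for illustration:

        using System.Web.Mvc;

        public class ContentController : Controller
        {
            public ActionResult Section()
            {
                // AJAX menu click: return only the .ascx partial, no master page
                if (Request.IsAjaxRequest())
                    return PartialView("SectionControl");

                // Direct hit (search bot, JS disabled): full .aspx view with master page
                return View("Section");
            }
        }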

    Read the article

  • PHP - Setting Database Info

    - by user1710648
    First off, I'm sorry if this shows no code, which is not what Stack Overflow is about... but I have no clue where to go on this. I have a basic CMS I made, and I am trying to distribute it. I want to make it so that upon going to /cms/install, for example, they set the database info and other info to integrate into the CMS. Now my issue is: what would be the best method to let the user store that database info? A cookie doesn't seem to be the right way... could I store database info inside of a database? Not too sure where to go on this. More or less: what is the best way to temporarily store the database information the user gave before the full CMS takes over?
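
    For what it's worth, the usual pattern (WordPress and most installable CMSes work this way) is for the installer to write the submitted credentials into a PHP config file on disk, then disable or delete the installer. A minimal sketch, with hypothetical file names and form fields:

        <?php
        // cms/install/index.php - persist the DB settings posted by the install form
        $config = "<?php\n"
                . "return array(\n"
                . "    'db_host' => " . var_export($_POST['db_host'], true) . ",\n"
                . "    'db_name' => " . var_export($_POST['db_name'], true) . ",\n"
                . "    'db_user' => " . var_export($_POST['db_user'], true) . ",\n"
                . "    'db_pass' => " . var_export($_POST['db_pass'], true) . ",\n"
                . ");\n";

        // Keep the file out of the web root if possible and lock down permissions
        file_put_contents(__DIR__ . '/../config.php', $config);
        chmod(__DIR__ . '/../config.php', 0640);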

    Read the article

  • JavaScript – Content delivery networks (CDN) can bite you in the butt.

    - by Ryan Ternier
    As much as I love the new CDNs that Google, Microsoft and a few others have publicly released, there are some strong gotchas that could come up and bite you in the ass if you're not careful. But before we jump into that, for those that are not 100% sure what a CDN is (besides Canadian):

    Content Delivery Network: a way of distributing your static content across various servers in different physical locations. Because this static content is stored on many servers around the world, whenever a user needs to access it, they are served from the closest server to their location. Already you can probably see the immediate bonuses of a system like this:

    - Lower bandwidth: even small script files downloaded thousands of times will start to take a noticeable hit on your bandwidth meter.
    - Fewer connections/hits to your web server, which gives better latency.
    - If you manage many servers, you don't need to manually update each server with scripts.
    - Shared caching: a user downloads a script for each website they visit, and if a user is redirected to many domains/sub-domains within your web site, they might download many copies of the same file. When the browser sees repeated requests for the same file from the same domain, it skips the download.

    Those are just a handful of the many bonuses a CDN will give you, and for the average website a CDN is a great choice. Check out the following CDN links for their solutions:

    Google AJAX Library: http://code.google.com/apis/ajaxlibs/
    Microsoft Ajax library: http://www.asp.net/ajaxlibrary/cdn.ashx

    The Gotcha

    There is always a catch. Here are some issues I found with using CDNs that hopefully can help you make your decision.

    HTTP / HTTPS: If you are running a website behind SSL, make sure that when you reference your CDN data you use https:// vs. http://. If you forget this, users will get a very nice message telling them that their secure connection is trying to access insecure data. For a developer this is fairly simple, but general users will get a bit anxious when seeing this.

    Trusted Sites: Internet Explorer has this really nifty feature that allows users to specify what sites they trust, and by some defaults IE7 only allows trusted sites to be viewed. No problem, they set your website as trusted. But what about your CDN? If a user sets your website as trusted, but not the CDN, they will not download those static files. This has the potential to totally break your web site.

    Pedantic Network Admins: This alone is sometimes the killer of projects. Always be careful when you are going to use a CDN for a professional project. If a network/security admin sees that you're referencing an outside source, or that a call from a website might hit an outside domain... panties will be bunched, emails will be spewed out and, well, no one wants that.
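
    A sketch of how the HTTPS gotcha is usually sidestepped: a protocol-relative URL inherits the page's scheme, and a second tag can fall back to a local copy when the CDN is unreachable or untrusted (the local path is a placeholder):

        <!-- scheme-relative reference: becomes https:// on SSL pages, http:// otherwise -->
        <script src="//ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <!-- fall back to a self-hosted copy if the CDN was blocked -->
        <script>window.jQuery || document.write('<script src="/js/jquery-1.4.2.min.js"><\/script>');</script>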

    Read the article

  • iOS app with a lot of text

    - by rdurand
    I just asked a question on StackOverflow, but I'm thinking that a part of it belongs here, as questions about design patterns are welcomed by the FAQ. Here is my situation. I have developed almost completely a native iOS app. The last section I need to implement is all the rules of a sport, so that's a lot of text. It has one main level of sections, divided into subsections, containing a lot of structured text (paragraphs, a few pictures, bulleted/numbered lists, tables). I have absolutely no problem with coding, I'm just looking for advice to improve and make the best design possible for my app. My first shot (the last one so far) was a UITableViewController containing the sections, sending the user to another UITableViewController with the subsections of the selected section, and then one strange last UITableViewController where the cells contain UITextViews, section headers help structure the content, etc. What I would like is your advice on how to improve the structure of this section. I'm perfectly ready to destroy/rebuild the whole thing; I'm really lost in my design here. As I said on SO, I've begun to implement a UIWebView in a UIViewController, showing an HTML page with jQuery Mobile to display the content, and it's fine. My question is more about the two views taking the user to that content. I used UITableViewControllers because that's what seemed the most appropriate for a structured hierarchy like this one. But that doesn't seem like the best solution in terms of user experience. What structure / "view-flow" / kind of presentation would you try to implement in my situation? As always, any help would be greatly appreciated! Just so you can understand the hierarchy better, a simple example:

    UINavigationController --> Section 1 --> SubSection 1.1 --> Content
                                             SubSection 1.2 --> Content
                                             SubSection 1.3 --> Content
                           --> Section 2 --> SubSection 2.1 --> Content
                                             SubSection 2.2 --> Content
                                             SubSection 2.3 --> Content
                                             SubSection 2.4 --> Content
                                             SubSection 2.5 --> Content
                           --> Section 3 --> SubSection 3.1 --> Content
                                             SubSection 3.2 --> Content

    (Sections: 1 UITableViewController with 3 rows. SubSections: 3 UITableViewControllers with different numbers of rows. Content: 10 UIViewControllers with a UIWebView.)

    Read the article

  • NTUSER.DAT and UsrClass.dat files building up by the thousands - why, and can I delete them?

    - by Anthony
    I've noticed that my web server, a 2008 Xen VM, is gradually losing free space - more than I would have thought from normal use - and decided to investigate. There are two problem areas:

    C:\Users\Administrator\ (6,755.0 MB), with files:
    NTUSER.DAT{randomness}.TMContainer'0000 randomness'.regtrans-ms
    NTUSER.DAT{randomness}.TM.blf

    and C:\Users\Administrator\AppData\Local\Microsoft\Windows\ (6,743.8 MB), with files:
    UsrClass.dat{randomness}.TMContainer'0000 randomness'.regtrans-ms
    UsrClass.dat{randomness}.TM.blf

    From what I understand these are point-in-time backups of registry changes. If that is the case, I cannot possibly understand why there would be 10,000+ changes. (That's how many files there are per folder location - over 20,000 in total.) The files are using almost 15 GB of space and I want rid of them; I'm just wondering whether I can remove them. However, I need to understand why they are being created so I can avoid this in the future. Any ideas why there would be so many? Is there a way I can check what is making the modifications? Are they created by login attempts? Are they created by everyday web server use? And so on.

    Read the article

  • Binary management/delivery

    - by Stan
    Is there any good solution for managing server application binaries (maybe up to 1 GB), with the aim of achieving version control and delivery, and having a way to verify that every remote server has the same version? Our operating system is Windows Server 2003.

    Read the article

  • Data loss through permissions change?

    - by charliehorse55
    I seem to have deleted some files on my media drive, simply by changing the permissions.

    The Story
    I have many operating systems installed on my computer, and constantly switch between them. I bought a 1 TB HD and formatted it as HFS+ (not journaled). It worked well between OSX and all of my Linux installations while having much better metadata support than NTFS. I never synced the UIDs for my operating systems, so the permissions were always doing funny things. Yesterday I tried to fix the permissions by first changing the UIDs of the other operating systems to match OSX, and then changing the file ownership of all files on the drive to match OSX. About 50% of the files on the drive were originally owned by OSX, the other half were owned by the various Linux installations. I started to try and change the file permissions for the folders, and that's when it went south.

    The Commands
    These commands were run recursively on the one section of the drive:

    sudo chflags nouchg
    sudo chflags -N
    sudo chown myusername
    sudo chmod 666
    sudo chgrp staff

    The Bad
    Sometime during the execution of these commands, all of the files belonging to OSX were deleted. If a folder had Linux-based files it would remain intact, but any folder containing exclusively OSX files was erased. If a folder containing Linux files also contained a subfolder with only OSX files, the subfolder would remain but is inaccessible and displays a file size of 0 bytes. Luckily these commands were only run on the videos folder; I also have a music folder with the same issue but I did not execute any of these commands on it. Effectively I have examples of the file permissions for all 3 states - the Linux files before and after, and the OSX files before.

    OSX file before:
    -rw-r--r--@ 1 charliehorse 1000 3634241 15 Nov 2008 /path/to/file
        com.apple.FinderInfo 32

    Linux file before:
    -rw-r--r--@ 1 charliehorse 1000 5321776 20 Sep 2002 /path/to/file
        com.apple.FinderInfo 32

    Linux file after (read only; a different file, but I believe the same permissions originally):
    -rw-rw-rw-@ 1 charliehorse staff 366982610 17 Jun 2008 /path/to/file
        com.apple.FinderInfo 32

    These files still exist, so if there are any other commands to run on them to determine what has happened here, I can do that.

    EDIT
    Running ls on one of the "empty" deleted OSX folders yields this:

    ls: .: Permission denied
    ls: ..: Permission denied
    ls: subdirA: Permission denied
    ls: subdirB: Permission denied
    ls: subdirC: Permission denied
    ls: subdirD: Permission denied

    I believe my files might still be there, but the permissions are screwed.
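
    Since the question asks what else could be run on those folders to diagnose, two read-only checks that might help (the path is a placeholder):

        sudo ls -lae@ /Volumes/Media/Videos   # -e prints ACLs, -@ prints extended attributes
        sudo du -sh /Volumes/Media/Videos     # shows whether data blocks are still allocated on disk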

    Read the article

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice and Analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. Therefore, my JavaScript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects:

    1. What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js. Remember, once websites start embedding this file, its location CANNOT change under any circumstances.

    2. Right now we can afford just one dedicated server. So I have minified my file, enabled gzip and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future?

    3. Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? Pros/cons of moving just this file to a CDN? Also, since it's just one JavaScript file (50 KB), is there an affordable CDN we could consider from the beginning?

    4. Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later? (Both in terms of server and JavaScript/ajax limitations.)

    Thanks in advance.
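
    On point 2, the gzip and cache-control pieces can be done with mod_deflate and mod_headers. A minimal sketch, assuming the file sits at /js/foo.js; the max-age is a guess to tune - since the URL can never change, you can't version-bust and shouldn't cache forever:

        # httpd.conf / .htaccess sketch - requires mod_headers and mod_deflate
        <Files "foo.js">
            Header set Cache-Control "public, max-age=86400"
        </Files>
        AddOutputFilterByType DEFLATE application/javascript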

    Read the article

  • Sun Grid Engine: Automatically Terminating Idle Interactive Jobs

    - by dmcer
    We're considering using Sun Grid Engine on a small compute cluster. Right now the current setup is pretty crude and just involves having people ssh to an open machine to run their jobs. We'd like to allow interactive jobs, since that should ease the transition from manually starting jobs to starting them using qsub. But there is some concern that, if we do, people might accidentally leave their interactive sessions idle and block other jobs from being run on the machines. The issue isn't just theoretical: we previously tried using OpenPBS, and there was a problem with people opening an interactive job in a screen session and essentially camping on a machine. Is there any way to configure SGE to automatically kill idle interactive jobs? It looks like this was requested as an enhancement (Issue #2447) way back in 2007, but it doesn't seem like the request ever got implemented.
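
    True idle detection doesn't appear to be built in, so this is only a blunt workaround: cap the wall-clock time of the queue that interactive jobs land in, which at least evicts campers eventually. A sketch, assuming a dedicated queue named interactive.q and an arbitrary 8-hour limit:

        # hard wall-clock limit: any job in this queue is killed after 8 hours
        qconf -mattr queue h_rt 08:00:00 interactive.q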

    Read the article

  • Cleaning Up Unused Users and Groups (Ubuntu 10.10 Server)

    - by PhpMyCoder
    Hello experts, I'm very much a beginner when it comes to Ubuntu and I've been learning the ropes by diving in and writing a (backend-language independent) web app framework that relies on apache, some clever mod_rewrites, Ubuntu permissions, groups, and users. One thing that really annoys my inner clean-freak is that there are loads of users and groups that are created when Ubuntu is installed that are never used (Or so I think). Since I'm just running a simple web app server, I would like to know: What users/groups can I remove? Since you'll probably ask for it...here's a list of all the users on my box (excluding the ones I know that I need): root daemon bin sys sync man lp mail uucp proxy backup list irc gnats nobody libuuid syslog And a list of all of the groups: root daemon bin sys adm tty disk lp mail uucp man proxy kmem dialout fax voice cdrom floppy tape sudo audio dip backup operator list irc src gnats shadow utmp video sasl plugdev users nogroup libuuid crontab syslog fuse mlocate ssl-cert lpadmin sambashare admin

    Read the article

  • Hide notification area GPO not applying

    - by Richard
    I have created a GPO to hide the notification area on Windows XP SP3. The GPO must apply to all students, but only in certain rooms, so I've also enabled loopback processing on the GPO and linked it to the OUs the computers are in. I've then added a group to the security filter that contains all student accounts. This is not applying; it doesn't even show up in gpresult. I have also tried linking it in the Students OU, which contains all student accounts, and applying a security filter with a group of the computers I want it to apply to. This didn't work either. It's possible I'm missing something straightforward. Would a WMI filter do the job, and if so, how would I go about writing one so that it'll only apply to computers whose name begins with XX-RT, for example?
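
    If the WMI route turns out to be the answer, a WQL query along these lines (in the default root\cimv2 namespace) matches only machines whose names start with XX-RT:

        SELECT * FROM Win32_ComputerSystem WHERE Name LIKE 'XX-RT%'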

    Read the article

  • Streamline Active Directory account creation via automated web site

    - by SteveM82
    In my company we have high employee turnover, and hence our helpdesk receives about a dozen requests per week for new Active Directory accounts. Currently we receive these requests simply via e-mail or voice-mail, and rarely do we have all of the information necessary to create the account. I would like to find a web application that can be used by a manager or supervisor to formalize the requests they make for AD accounts for new employees under their command. Ideally, the application would prompt for all of the necessary information and allow the helpdesk to review the requests and approve or deny each one. If approved, the application would take care of creating the account and send an e-mail to the manager. I have found several applications on the Internet that handle self-service account management (i.e., password resets or updating contact info), which is also nice to have, but nothing that streamlines the new account request and creation part. Can anyone suggest such an application? Thanks.
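
    Whatever front end is chosen, the provisioning step it performs usually boils down to something like the following PowerShell - a sketch only, with the OU path and naming convention as assumptions:

        Import-Module ActiveDirectory

        # Values would come from the manager's web form after helpdesk approval
        New-ADUser -Name "Jane Doe" `
                   -SamAccountName "jdoe" `
                   -UserPrincipalName "jdoe@example.com" `
                   -Path "OU=Staff,DC=example,DC=com" `
                   -AccountPassword (Read-Host -AsSecureString "Initial password") `
                   -Enabled $true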

    Read the article

  • Nginx: exclude one or more file extensions from a static files location

    - by Evgeniy
    I'm serving up a static site via nginx:

    location ~* \.(avi|bin|bmp|dmg|doc|docx|dpkg|exe|flv|gif|htm|html|ico|ics|img|jpeg|jpg|m2a|m2v|mov|mp3|mp4|mpeg|mpg|msi|pdf|pkg|png|ppt|pptx|ps|rar|rss|rtf|swf|tif|tiff|txt|wmv|xhtml|xls|xml|zip)$ {
        root /var/www/html1;
        access_log off;
        expires 1d;
    }

    My goal is to exclude requests like http://connect1.webinar.ru/converter/task/. A full URL looks like http://mydomain.tld/converter/task/setComplete/fid/34330/fn/7c2cfed32ec2eef6788e728fa46f7a80.ppt.swf. Although these URLs end in a static-looking extension, they are not static files but fake script requests, so I have problems with them. What is the best way to do this? How can I add an exclusion for these URLs, or maybe exclude the specific file extensions (.ppt.swf, .pptx.swf) from this nginx location? Thanks.
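
    One way to do it, assuming all the dynamic requests live under /converter/: add a ^~ prefix location, which nginx matches in preference to regex locations, so those URLs never reach the static block (the backend address is a placeholder):

        location ^~ /converter/ {
            proxy_pass http://127.0.0.1:8080;   # hand these off to the application
        }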

    Read the article

  • Windows server 2008 R2 IIS7 file permissions

    - by StealthRT
    Hey all, I am trying to figure out why I can not access an index.php file from within the wwwroot/mollify/backend directory. It keeps coming up with this:

    Server Error
    403 - Forbidden: Access is denied.
    You do not have permission to view this directory or page using the credentials that you supplied.

    I've given all the permissions (Full Control) on the wwwroot directory to every account I could think of (IUSR, Guest, GUESTS, IIS_IUSRS, Users, Administrators, NETWORK, NETWORK SERVICE, SYSTEM, CREATOR OWNER & Everyone). I also added index.php to the "Default Document" list under my website settings in IIS 7 Manager. What else am I missing? Thanks! David

    Read the article

  • serving static assets via http is really slow compared to sshfs (apache2/nginx)

    - by s1lv3r
    After migrating to a new VPS I had some users complaining about slow-loading images on their sites. After creating some test files with dd, I realized that I can download all files via sshfs at full speed, while downloads over the web are painfully slow. The larger the file is and the longer the transfer takes, the slower the transfer speed gets. I thought I had some problem with Apache and just spent the whole evening replacing Apache2 with nginx for static file serving - with no effect at all. There are no I/O wait states in top, tons of RAM free, no high CPU utilization, and hdparm shows decent I/O performance at all times. I just have no idea anymore what's happening on this server. This is a link to a demo file: http://master.dealux.de/file.tgz - anybody have an idea what I can check?
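
    To put numbers on what the users are seeing, curl's timing variables are handy; a quick check against the demo file from the question:

        # prints average download speed and total transfer time
        curl -o /dev/null -s -w 'speed: %{speed_download} bytes/s  total: %{time_total} s\n' \
             http://master.dealux.de/file.tgz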

    Read the article

  • Many users send using a single address, replies to that single address go to many users

    - by Keyslinger
    I work in an office with a Microsoft Exchange server for email. I would like to have the following workflow: John, Mary, or Sam send a message from Outlook on their respective computers. The customer receives the message from the address "[email protected]" The customer replies to the message from [email protected] and it is received by John, Mary, or Sam depending on who sent the message (if it was sent by John, the reply is sent to John, and so on). All users should also be able to send emails from their respective addresses as well (e.g. [email protected], etc.) Is this possible? If so, how can it be accomplished?

    Read the article

  • Test a site with a static subdomain locally

    - by bcmcfc
    How can I locally test a site that uses one or more static domains for serving images? E.g. domain.tld with images served from static.domain.tld. I have a local working copy of the site on WAMP, checked out from SVN: the URLs will be pointing at static.domain.tld rather than static.domain.local.
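
    One approach, assuming WAMP on Windows: point the live names at localhost in the hosts file and give the static name its own vhost (paths are placeholders; remember to remove the hosts entries when done):

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1  domain.tld
        127.0.0.1  static.domain.tld

        # httpd-vhosts.conf
        <VirtualHost *:80>
            ServerName static.domain.tld
            DocumentRoot "C:/wamp/www/site/static"
        </VirtualHost>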

    Read the article

  • Roaming Profiles, Folder redirection or... both

    - by Adrian Perez
    Hello, I'm deploying Remote Desktop Services on Windows Server 2008 R2. For now it's going to be one server, but in the future another server could be added to the farm. I'm setting up roaming profiles and folder redirection to save space. Now I have some doubts... if I'm redirecting all the folders I can through GPO (Start Menu, Desktop, AppData, My Documents, Videos, Music...), does it make sense to use roaming profiles? I mean, I'm redirecting almost everything. So if I don't use roaming profiles, what kind of data is not shared/roamed? Perhaps roaming profiles are not necessary, and if I set them up I will just add unnecessary complexity to the infrastructure. What do you think? Any advice or recommendations? Thanks!

    Read the article

  • Serving static files fails - nginx

    - by Sergei
    Hi, I've been looking and trying around all night, but without success. I configured nginx to serve my static files and proxy all the other traffic:

    server {
        listen 80;
        server_name mydomain.com;

        access_log /home/boudewijn/www/bbt/brouwers/logs/access.log;
        error_log /home/boudewijn/www/bbt/brouwers/logs/error.log;

        location / {
            proxy_pass http://127.0.0.1:8080;
            include /etc/nginx/proxy.conf;
        }

        location /media/ {
            root /home/boudewijn/www/bbt/brouwers/;
        }
    }

    The proxy passing is no problem, but when I go to mydomain.com/media/ or try to access any testfile over there, it's without success. I paid attention to the difference between root and alias, my media folder exists, I paid attention to the trailing slashes, but still I get a 404 when trying to access my static media files. Any help?

    Read the article

  • Active Directory Password Formats

    - by Brent Pabst
    Hi, I'm working on an open source project that will manage Active Directory users. I am looking for feedback from Windows/Active Directory admins on the formats of usernames they prefer or their organization uses. I want to make sure the software allows admins to use the most popular formats when new users are created. Here is the list I have so far:

    1. <firstname><lastname>
    2. <lastname><firstname>
    3. <lastname><firstinitial>
    4. <lastname><firstinitial><middleinitial>
    5. <firstinitial><lastname>
    6. <firstinitial><middleinitial><lastname>
    7. <firstname><lastinitial>

    In addition, how do you handle multiple identical names? So if two John Smiths exist, do you append a numeric suffix, or interject a middle initial or name to solve the problem? Thanks for the feedback.

    Read the article

  • Recommendations on managing dot files for users using Puppet

    - by Beaming Mel-Bin
    The goal is to have a collection of dot files (.bashrc, .vimrc, etc.) in a central location. Once it's there, Puppet should push the files out to all managed servers. I initially was thinking of giving users FTP access where they could upload their dot files, and then having an rsync cron job. However, that might not be the most elegant or robust solution. Wanted to see if anyone else had some recommendations.
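
    Since Puppet is already managing the servers, one option is to keep the dot files in a module and let file resources handle the distribution; a minimal sketch (the module name and user are assumptions):

        # modules/dotfiles/manifests/init.pp
        class dotfiles {
          file { '/home/alice/.bashrc':
            ensure => file,
            owner  => 'alice',
            group  => 'alice',
            mode   => '0644',
            source => 'puppet:///modules/dotfiles/alice/bashrc',
          }
        }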

    Read the article

  • Can I create an SSH user which can access only certain directory?

    - by RiMMER
    I have a Virtual Private Server which I can connect to over SSH with my root account, which can obviously execute any Linux command and access the entire disk. I would like to create another user account which would also be able to access this server using SSH, but only a certain directory, for example /var/www/example.com/. For example, imagine this user has a HUGE error.log file (500 MB) located in /var/www/example.com/logs/error.log. When accessing this file using FTP, this user needs to download 500 MB to view the last lines of the log, but I'd like him to be able to execute something like this: tail error.log. Therefore I need him to be able to access the server using SSH, but I don't want to grant him access to all server areas. How can I do this?
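
    OpenSSH's ChrootDirectory directive (4.8 and later) is one way to fence a user in; a sketch, with the user name as a placeholder. Note that the chroot target must be owned by root and not group/world-writable, and that running commands like tail inside the chroot requires copying a shell plus the needed binaries and libraries into it - the SFTP-only variant shown in the comment is much simpler:

        # /etc/ssh/sshd_config
        Match User siteuser
            ChrootDirectory /var/www/example.com
            # For file transfer only, force the built-in SFTP server instead of a shell:
            # ForceCommand internal-sftp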

    Read the article
