Search Results

Search found 37688 results on 1508 pages for 'site search'.

Page 518/1508 | < Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >

  • SharePoint Saturday DC 2010 Slides, Demo Scripts, and Pictures

    - by Brian Jackett
    Wow! This past weekend I attended SharePoint Saturday Washington DC (SPSDC), which was quite an event to say the least.  For those unfamiliar, SharePoint Saturday is a community-driven event where various speakers gather to present at a FREE conference on all topics related to SharePoint.  This made my fifth SharePoint Saturday attended and fourth I’ve spoken at.  SPSDC was a bit different than most SharePoint Saturdays, mostly due to its scale: we had almost 950 attendees, over 80 speakers presenting close to 90 sessions, and dozens of sponsors.  A big thanks goes out to the organizers of this event.  They put in a lot of hard work and time to pull this event off and should be very proud of the end result.

    For SPSDC I presented “The Power of PowerShell + SharePoint 2007”.  I want to thank all of the attendees of my session for coming and asking some great questions.  Below you can find the slides and demo scripts for this session.  I also took some photos throughout the day (not as many as usual since so much was going on), so check them out.  If you have any follow-up questions, feel free to drop me a line in the comments or via the contact link at the top of the site.

    Slides and Scripts: Click here for the demo scripts and slides posted on my SkyDrive.

    VERY IMPORTANT NOTE: One thing I forgot to mention in my presentation.  In order to run code against the SharePoint API you need to load the Microsoft.SharePoint.dll assembly first.  Run the command below on the PowerShell console to do that:

        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

    Photos: Facebook album -or- my album on the Windows Live site (higher-res shots).

    -Frog Out
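
    Once the assembly is loaded, the rest of the object model is available straight from the console. The following is a minimal sketch, not part of the original post; the site URL is a placeholder for your own:

        # Load the SharePoint API first (as noted above)
        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

        # Open a site collection and list its webs (URL is a placeholder)
        $site = New-Object Microsoft.SharePoint.SPSite("http://intranet/sites/demo")
        $site.AllWebs | Select-Object Title, Url

        # Always dispose SPSite/SPWeb objects you create yourself
        $site.Dispose()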

    Read the article

  • Domain changes required for SSL integration

    - by user131003
    Currently my site supports regular payment options (the user is taken to the Payment Gateway/PG website). Now I'm trying to implement "seamless" PG integration, and I need SSL for this. I have a dedicated server with 5 static IPs from Hostgator (HG). The options: 1) I get SSL for www.my_domain.com. According to HG, I need to change the IP of the main site, as the current IP is not really dedicated (it is shared by cPanel etc.), so they need to bind another dedicated IP to the main domain for SSL to work. This would require a DNS change for the main website and hence cause a few hours of downtime (which is OK). 2) I've noticed that most e-commerce websites use subdomains like secure.my_domain.com for SSL/HTTPS. This sounds like a better approach, but I have a few doubts in this case: a) Would I need to re-register with existing PGs (PayPal, Google Checkout, Authorize.net) if I switch to a subdomain? Re-registering is not an option for me. b) Would a DNS change be required for www.my_domain.com in this case? This confusion arose because of the following reply from HG: "If the sub domain secure.my_domain.com is added to an existing cPanel it will use the IP for that cPanel so as long as it is a Dedicated IP that will be fine. If secure.my_domain.com gets setup as its own cPanel it will need to be assigned to a Dedicated IP which would have a DNS change involved." Please advise.

    Read the article

  • Is it possible to transfer a domain without a "gap" in Whois privacy protection?

    - by Guest
    I currently own several domains on which I am using a Whois privacy protection service to hide my personal details. In the near future, I would like to transfer some of these domains to a different registrar. It has been many years since I last performed domain transfers, so I am no longer knowledgeable about what it involves. However, I have read from several registrars that they ask their customers to disable Whois protection before effecting a domain transfer. Since there are several websites out there that publish archived versions of Whois information (and ask handsome money for the information to be hidden, of course), I would prefer to avoid having such a "gap" in my privacy protection. I figured that these websites would fetch Whois information mainly when a query is effected through their own website. However, I have found out that at least one of these sites had a copy of the Whois information for a new domain up on their site within hours after I registered it, so they must have some other source (of course I used a Google search to find that out, not their own site). What that tells me is that the time it takes for the domain transfers to go through would be more than enough for these rogue websites to cache my information. If my new registrar offers privacy protection for domains right from the point of registration as well, is there no way to transfer the domain between the two without reverting to my default Whois information in between?

    Read the article

  • Chrome Mobile Monthly: Responsive vs Separate Sites

    Join us on Wednesday, October 31st at 9am PT for our Monthly Mobile Web Hangout! This month +Brad Frost will be joining us to talk about responsive design versus separate mobile sites. And in keeping with the season, it's a special Presidential Smackdown Edition. The US presidential race is in full swing, and the candidates are intensely debating the country's hot-button issues. The web design world is entrenched in our own debate about how to address the mobile web: should we create a separate mobile site or create a responsive experience instead? It just so happens that the two US presidential candidates have chosen different mobile web strategies for their official websites. In the red corner is Republican candidate Mitt Romney's dedicated mobile site, while in the blue corner is incumbent president Barack Obama's responsive website. Which will prevail? Sit back, crack open a cold one, and watch the battle unfold as Brad dissects the candidates' sites to uncover best practices and common mobile web pitfalls. From: GoogleDevelopers

    Read the article

  • Wordpress .htaccess preventing subfolder access

    - by John K.
    This is sort of a goofy setup, but it's not in my power to reconfigure it at this time. I'm running in a shared hosting environment. The domain is example.com. This is an add-on domain on the host side, with example.com being redirected to the www/example.com sub-directory. That directory houses a standard WordPress site which acts as the main site when you visit example.com. The .htaccess file within that directory is:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^wp-admin/profile\.php$ /ssm/welcome [R]
        </IfModule>

    I have a subdirectory at the root level, alongside the /example.com subdirectory, that houses a CakePHP application. That subdirectory is /tracker. My problem is that when I attempt to browse to example.com/tracker, I get a 404 from WordPress because permalinks are on. What I think I need is either a rewrite rule in the WordPress .htaccess file that short-circuits the existing rewrite rules and permits example.com/tracker to work independently of the WordPress install, or a rewrite rule at the root level that short-circuits the redirect to the /example.com directory in the first place. Not sure how well I explained that, so here's a summary of the www/ directory structure:

        example.com/
        tracker/

    An add-on domain of www.example.com redirects to the /example.com directory with WordPress, and a tracker/ directory running CakePHP should be accessible via www.example.com/tracker. If you need further info or clarification, let me know!
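
    One commonly suggested shape for that short-circuit (a sketch, not from the original question; it assumes tracker/ is reachable under the same document root) is a passthrough rule placed above the WordPress block:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        # Let anything under /tracker fall through to CakePHP untouched;
        # this must come before the WordPress catch-all rules
        RewriteRule ^tracker($|/) - [L]
        </IfModule>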

    Read the article

  • USB mass storage device can't be mounted

    - by revo
    It's my Android phone's SD card, which Android reported as damaged last night, out of the blue! I put it directly into a USB port with a USB SD card holder case, so that way I can recover it with TestDisk, which I had used before in a similar situation. I also noticed that there is a change in file system and capacity:

        File System : RAW
        Capacity : 0 (unknown capacity)

    Also, TestDisk doesn't show it in its partitions list. A 2 GB SD card is not that important in price, but it holds a lot of files and media that I need. Using a mini card reader, TestDisk displayed it in its list, but neither a quick search nor a deep search returns any results ("No partition found or selected for recovery"), and then I have to quit the program. Your help is appreciated.

    Update #2: lsusb output:

        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 004 Device 002: ID 04f3:0234 Elan Microelectronics Corp.
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 002: ID 058f:6366 Alcor Micro Corp. Multi Flash Reader
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 007 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 006 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
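
    Not part of the original question, but a common precaution with a failing card (a sketch; the /dev/sdX device name is an assumption, so check which node the reader actually exposes) is to image whatever is still readable and run the recovery tools against the image rather than the dying card:

        # Find which device node the card reader exposed (look for the card's size)
        sudo fdisk -l

        # Copy everything readable into an image file, keeping a log so the
        # copy can be resumed and bad areas retried later
        sudo ddrescue /dev/sdX sdcard.img sdcard.log

        # Point the recovery tools at the image instead of the card
        testdisk sdcard.img
        photorec sdcard.img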

    Read the article

  • How to handle user accounts for many sites running on same server

    - by Simon Courtenage
    Background to this question: I want to host multiple e-commerce sites on the same server, each with their own separate customer login application. Each site's login application needs to be secured by SSL. I'm unsure how best to handle this. For example, do I need to acquire a separate SSL certificate for each site (in which case, how do I do this dynamically, as the sites are created), or do I handle this using ONE login gateway-style application, which handles it on behalf of all the sites via a kind of transparent redirect? I'd be grateful for any pointers or advice. Thanks.

    Read the article

  • Installing the http_ssl module on a running NGINX server

    - by Rob
    Hi, new to NGINX here. We inherited a project that runs Django/FCGI/NGINX on a hosted RHEL box. A requirement has come in that the site now needs to have SSL enabled. The client was pretty sure the person who built the site had set it up so they could use SSL. I backed up the conf file, added the server block for the SSL instance and tried to reload. The reload failed because it didn't recognize the ssl in this line: ssl on; Not an NGINX expert, but the David Caruso in me tells me that the server (sunglasses on) is not secure. I know that you need to configure NGINX with this module at install time. If this didn't happen, how hard/risky is it to reconfigure a running NGINX box with this module, given that we didn't configure it in the first place?
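
    For context (not from the original question): NGINX modules of that era are compiled in, so adding SSL support means rebuilding the binary with the existing configure flags plus --with-http_ssl_module. A rough sketch, assuming a source build; the paths and the flag list will differ per box:

        # Show the configure flags the running binary was built with
        nginx -V

        # Rebuild from the matching source tree, re-using those flags
        # (<existing flags> is a placeholder for the nginx -V output)
        ./configure <existing flags> --with-http_ssl_module
        make

        # Back up the old binary, swap in the new one, restart
        cp /usr/sbin/nginx /usr/sbin/nginx.bak
        cp objs/nginx /usr/sbin/nginx
        service nginx restart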

    Read the article

  • Legal issues regarding embedding a toolbar into a browser [closed]

    - by OmarOthman
    We are in the process of developing software that provides a service to internet users, and we would like to ask about the legal liabilities of some issues. Of course, everything is to be done with the consent of the user of our software, but our concern is about third-party tools and services that may be invoked/used by our product. In particular, these are the concerns: (1) Embedding a toolbar into an existing browser. This screenshot is an example, where the words in the highlighted toolbar are passed to www.google.com for searching, and the contents of the window are the results of the search. I want to know whether any consent should be obtained before such a toolbar can be embedded in a web browser, whether there are any legal requirements imposed by the web browser, and whether different web browsers have different requirements (at least for Internet Explorer, Firefox, Chrome, Opera and Safari). (2) Invoking a free website from that toolbar (like Google’s search page). The screenshot above demonstrates such an existing toolbar. (3) Full ownership of and unrestricted access to the data entered into this toolbar. In the screenshot above, I want to take the words (translation english to spanish) and own them, i.e. store them in my database and do some processing on them. (4) The ability to track the pages visited by the user starting from that free website. In the screenshot above, you can notice that the user opted only for the third result, whose URL is translate.google.com. I want to have access to this and all URLs clicked from this page for some processing as well. This is a commercial application, so I need a very concrete, precise and reference-supported answer.

    Read the article

  • Well-maintained privacy add-on/settings set that takes usability into account?

    - by Foo Bar
    For some weeks I've been trying to find a good set of Firefox add-ons that give me a good portion of privacy/security without losing too much usability, but I can't seem to find a combination of add-ons/settings that I'm happy with. Here's what I tried, together with the pros and cons that I discovered:

    HTTPS Everywhere: has only pros: just install and be happy (no interaction needed), loads known pages SSL-encrypted, is updated fairly often.
    NoScript: fine, but needs a lot of fine-tuning; often maintained; mainly blocks all non-HTML/CSS content, but the author sometimes seems to make "untrustworthy" decisions.
    RequestPolicy: seems dead (last activity 6 months ago, some annoying bugs, the official support mail address is dead), but its purpose is really great: it gives you full control over cross-site requests. It blocks by default and lets you add sites to a whitelist; once that is done it works interaction-free in the background.
    AdBlock Edge: blocks specific cross-site requests from a pre-defined whitelist (I can never be fully sure; I need to trust others).
    Disconnect: like AdBlock Edge, just looking different; has no interaction possibilities (I can never be fully sure, need to trust others, and cannot interact even if I wanted to).
    Firefox's own cookie management (block by default, whitelist specific sites): after building my own whitelist it does its work in the background and I have full control.

    All these add-ons together basically block everything insecure, but there are a lot of redundancies: NoScript has a mixed-content blocker, but FF has had its own for a while now. The cookie blocker in NoScript is redundant with my FF cookie settings. NoScript also has an XSS blocker, which is redundant with RequestPolicy. Disconnect and AdBlock are extremely redundant, but not fully. And there are some bugs (especially in RequestPolicy), and RequestPolicy seems to be dead. All in all, this list is great but has these heavy drawbacks. My favourite set would be "NoScript Light" (only script blocking, without all the additional redundant-to-other-add-ons hick-hack it does) + HTTPS Everywhere + a RequestPolicy clone (maintained, less buggy), because RequestPolicy makes all other "site blockers" obsolete (it blocks everything by default and lets me create a whitelist). But since RequestPolicy is buggy and seems to be dead, I have to fall back to AdBlock Edge and Disconnect, which don't block everything and need more maintenance (whitelist updates, trust checks). Are there add-ons that fulfill my wishes?

    Read the article

  • Frustration with superuser.com user interface (please migrate this to "meta")

    - by Randolf Richardson
    Can someone please migrate this question to "meta" for me? I'm unable to post there while I wait for OpenID to fix my password problems. Thanks. I'm having problems with superuser.com's interface -- when I provide an answer to a question, sometimes the buttons get locked and then I find out that the question was migrated. Usually I can go back and copy-and-paste my answer at whatever site it goes to, but on occasion my answer is lost and I have to re-type it. This is very time-consuming, and makes it quite frustrating to use the system. In addition, I find that I'm wasting a lot of time dealing with having to re-register on the other sites. My suggestion is to not de-activate the "Submit answer" button but to just forward that along to the migrated site automatically, thus ensuring that answers that people put a lot of effort into don't get lost. Thanks in advance.

    Read the article

  • Mini keyboard has no home/end keys; how to type them?

    - by Steve Crane
    Some months ago I needed a small keyboard and bought an Okion KM229 without noticing that it has no Home or End key. This makes it tricky to type as I'm so used to using these keys. I haven't yet figured out if there is a key combination that issues Home and End keystrokes. Does anyone have experience of these keyboards and know how to issue those keystrokes? The keyboard is used on a PC running Windows XP. I have used the contact form on the Okion USA web site to ask this question but received no response. Wikipedia suggests that Home and End keystrokes are issued with Fn-Left and Fn-Right on some limited size keyboards. I may have tried those but don't remember for sure. I will try them again though when I am on site again in a week or so. Any other thoughts would be welcomed in the meantime.

    Read the article

  • Should I start MCPD training now or wait for new exams?

    - by lunchmeat317
    I apologize if this question has been asked before, or if this is the wrong place to put it. I'm beginning my study track for the MCPD certification in Web Development. However, Microsoft plans to retire this certification on July 31st, 2013, along with two of the necessary tests to receive it. On MS's site, I can't find a newer certification path to take. I imagine that Microsoft will release new certification paths and new tests for their new software, but I don't know when that will happen. I don't really know anything about Microsoft's process, as this is the first Microsoft certification I'll be studying for. The bottom line is this: I don't want to lose six months waiting for a new test to appear that won't expire, but I don't want to rush to get a certification that will be invalid in six months (or have to reset any progress due to new study material). To those with experience in affairs like this: what is the best course to take, and how can I maximize the time I have now (rather than waiting for new testing material)? Is there any way to find material for the new tests that Microsoft will be rolling out? Thank you for your patience. If this is the wrong place to put this question, I would like to request that it be moved to the correct StackExchange site instead of being closed. Thanks for your help!

    Read the article

  • Nginx no static files after update

    - by SomeoneS
    First, I must say that I am not an expert in server administration; my site was set up by hosting admins (whom I cannot contact anymore). A few days ago I updated Nginx to the latest version (an admin told me it was safe to do). But since then, my site serves only HTML content: no CSS, images, or JS. If I try to open some image I get the message "Welcome to nginx" (the same thing happens if I try to open static.mysitedomain.com). More details: the site has a static. subdomain, but the static files are in the same directory as they were before setting up static files. I was googling for some solutions and tried changing a few things in /etc/nginx/, but no luck. I feel that this is some minor configuration problem. Any ideas? EDIT: Here is the /etc/nginx/nginx.conf file content:

        user www-data;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;

            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            ##
            # Gzip Settings
            ##
            gzip on;
            gzip_disable "msie6";
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##
            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##
            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Here is the /etc/nginx/sites-enabled/default file content:

        server {
            #listen 80; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default ipv6only=on; ## listen for ipv6

            root /usr/share/nginx/www;
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                deny all;
            }

            # Only for nginx-naxsi : process denied requests
            #location /RequestDenied {
            #    # For example, return an error code
            #    return 418;
            #}

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            #
            #error_page 500 502 503 504 /50x.html;
            #location = /50x.html {
            #    root /usr/share/nginx/www;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            #
            #location ~ \.php$ {
            #    fastcgi_split_path_info ^(.+\.php)(/.+)$;
            #    # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
            #
            #    # With php5-cgi alone:
            #    fastcgi_pass 127.0.0.1:9000;
            #    # With php5-fpm:
            #    fastcgi_pass unix:/var/run/php5-fpm.sock;
            #    fastcgi_index index.php;
            #    include fastcgi_params;
            #}

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            #
            #location ~ /\.ht {
            #    deny all;
            #}
        }

        # another virtual host using mix of IP-, name-, and port-based configuration
        #
        #server {
        #    listen 8000;
        #    listen somename:8080;
        #    server_name somename alias another.alias;
        #    root html;
        #    index index.html index.htm;
        #
        #    location / {
        #        try_files $uri $uri/ /index.html;
        #    }
        #}

        # HTTPS server
        #
        #server {
        #    listen 443;
        #    server_name localhost;
        #
        #    root html;
        #    index index.html index.htm;
        #
        #    ssl on;
        #    ssl_certificate cert.pem;
        #    ssl_certificate_key cert.key;
        #
        #    ssl_session_timeout 5m;
        #
        #    ssl_protocols SSLv3 TLSv1;
        #    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
        #    ssl_prefer_server_ciphers on;
        #
        #    location / {
        #        try_files $uri $uri/ /index.html;
        #    }
        #}
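
    A note not from the original question: the "Welcome to nginx" page is what the default server block above (server_name localhost) serves when no other block matches the requested host name, which suggests no active server block is matching the static. subdomain. A sketch of the kind of block that would; the server names and root path are placeholders, not the asker's real values:

        server {
            listen 80;
            # Placeholder names: match the main domain and the static subdomain
            server_name mysitedomain.com www.mysitedomain.com static.mysitedomain.com;

            # Placeholder path: must point at the directory that actually
            # holds the site's CSS/JS/image files
            root /var/www/mysitedomain;
            index index.html index.htm;

            location / {
                try_files $uri $uri/ /index.html;
            }
        }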

    Read the article

  • Windows 2008 IIS 7.0 HTTP to HTTPS Redirect -- Versus IIS 6.0 Mechanism

    - by Dan7el
    Creating a mechanism for redirection from HTTP to HTTPS on a Windows 2008 server running IIS 7.0 is a much written-about topic on the Internet. How this is done is really not so much my issue. My issue is more one of explaining why this can't be done with the standard HTTP Redirect module that ships with Windows 2008 IIS 7.0, and why other, more arduous methods are needed instead. First, the IIS 6.0 method requires no externally available modules, nor does it require any modifications to the web.config or any other development effort. It's outlined here: http://blogs.microsoft.co.il/blogs/dorr/archive/2009/01/13/how-to-force-redirection-from-http-to-https-on-iis-6-0.aspx And, you can see the basic steps are to run the snap-in, get the properties on the site, and make some modifications. Presto, you have the HTTP-to-HTTPS redirect set up. Now, on the IIS 7.0 platform, it doesn't seem this simple. An initial search found the following site: http://www.sslshopper.com/iis7-redirect-http-to-https.html which has two separate approaches: 1. Installing a separately available Microsoft module, the URL Rewrite Module, and then adding XML to the web.config. 2. A custom error page. There might be other methods, but these are the basic ones, and the first is listed as the primary method. But wait: there exists on IIS 7.0 an HTTP Redirect module. So why can't I use the HTTP Redirect module to do this very thing? This is really my big question. I need to know this because my management is going to insist I use the HTTP Redirect module and set up the HTTP-to-HTTPS redirect in a similar fashion to how we do it in IIS 6.0. Can someone please explain to me, in clean, simple, easy-to-understand terms that both I and my management can follow, why I need to go get the URL Rewrite Module, install it on the server and make the web.config changes suggested by the article, instead of simply using the HTTP Redirect module that's already installed on the site? Thanks a bunch.
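
    For reference (not part of the original question), the web.config fragment that the URL Rewrite approach revolves around generally looks like the following. This is a sketch of the commonly cited rule, and it assumes the separately installed URL Rewrite module:

        <system.webServer>
          <rewrite>
            <rules>
              <!-- Redirect any request arriving over plain HTTP to HTTPS -->
              <rule name="HTTP to HTTPS" stopProcessing="true">
                <match url="(.*)" />
                <conditions>
                  <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                </conditions>
                <action type="Redirect" url="https://{HTTP_HOST}/{R:1}"
                        redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>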

    Read the article

  • Program or Firefox plugin to automatically manipulate website HTML using specific rules?

    - by Rookie
    I have noticed dozens of times how tedious some websites are. Therefore, I need a program, a plugin for Firefox, or whatever else does the job, to be able to add some sort of checks to specific websites. For example, the program could search for a regexp pattern and then perform an action based on it. Say I find a certain language on a Wikipedia page: I would like to move or copy it to the top of the languages list. The action wouldn't necessarily have to be applied to the regexp match itself: it could issue another regexp search, and if that is found, it would perform the actions I want, such as deleting a block of other HTML, moving it, or copying it to another location.
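
    One tool in this space (a suggestion, not from the original question) is a Greasemonkey-style userscript, which runs your own JavaScript against matching pages. A sketch of the Wikipedia example; the #p-lang selector and the language name are assumptions for illustration:

        // ==UserScript==
        // @name     Move a language link to the top (sketch)
        // @include  http*://en.wikipedia.org/*
        // ==/UserScript==

        // Assumed selector for the sidebar language list
        var list = document.querySelector('#p-lang ul');
        if (list) {
          var items = list.querySelectorAll('li');
          for (var i = 0; i < items.length; i++) {
            // Pattern-match an item, then act on it (here: move it to the top)
            if (/Esperanto/.test(items[i].textContent)) {
              list.insertBefore(items[i], list.firstChild);
              break;
            }
          }
        }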

    Read the article

  • Out Of Memory Error - Magento

    - by robobobobo
    OK, normally I understand when my server is giving me out-of-memory errors, but this one has me stumped! I'm running a Magento-based site with one or two plugins in it; the rest is pretty basic. The site runs and loads fine with no issues. However, in the backend, under Configuration - Payment Methods, it gives me the following out-of-memory error: Fatal error: Out of memory (allocated 39059456) (tried to allocate 85 bytes) in ########/Varien/Simplexml/Element.php on line 84 Now this is where I'm confused: it has already allocated far more than it then tried to allocate. Am I reading that right? So how is it running out of memory? My server has 6 GB RAM, an SSD and 2 CPUs running WHM, with a few other low-traffic sites on it. I set my PHP memory limit to 100 MB, 1000 MB and finally unlimited, but all to no avail! I'm completely lost here; I would really appreciate some expertise on this. Cheers

    Read the article

  • Reg Expression htaccess RewriteRule

    - by Rick
    I am new to using regular expressions for rewriting URLs in .htaccess. I need to redirect mysite.com/123 to mysite.com/, IF a cookie named 'ref' is set. My current .htaccess is:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        RewriteRule ^/([0-9]+)/$ http://www.mysite.com
        </IfModule>

    The goal is that when someone enters the site with mysite.com/111 (some number), they are redirected to the home page of the site after the cookie is set. Be nice... I'm new! ;o)
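
    A sketch of how that rule might look once two common per-directory gotchas are accounted for: in .htaccess context the leading slash is stripped before matching, and the trailing slash should be optional so that both /123 and /123/ match. mysite.com stays as the asker's placeholder domain:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_COOKIE} ref=true [NC]
        # No leading slash in per-directory context; trailing slash optional
        RewriteRule ^([0-9]+)/?$ http://www.mysite.com/ [R,L]
        </IfModule>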

    Read the article

  • Need help with cybersquatting complaint: can a domain name forward AND resolve at the same time? [on hold]

    - by Alan
    Probably a silly question for you pros... but for this novice here, I just want to make sure my understanding is correct. Context: I am trying to prove that a domain name owner has been cybersquatting and has never used the domain name in question. There are 4 captures from the WayBackMachine over a three-year period that show the domain name resolving to a basic server index page with either no files or a single cgi-bin folder. The domain name owner claims, however, that the domain name was forwarded the entire time to another website, and that these captures probably coincided with occasional "outages." It is my understanding that: a) domain name forwarding is binary: if a domain name is forwarded to a valid site, it cannot simultaneously resolve to a valid IP address. Is this correct? b) domain name forwarding is not subject to "outages": servers can have outages, and websites can be down, but the forwarding itself cannot be down, as this is simply a pointer (or the entire registrar where the DNS settings are hosted would have to malfunction). Is this correct? FINALLY, a bonus question for pro webmasters: what is the likelihood that the WayBackMachine would capture the domain name on just those occasions when the webmaster disabled forwarding to supposedly work on the new site? Mucho thanks in advance!

    Read the article

  • What's the best way to monitor a large number of application pools in IIS7?

    - by Kev
    Some background first - we're running IIS 7 on Windows 2008, with around 250 websites per server and each site in its own application pool. I need a way to monitor each application pool for crashes and hangs, and to send an email alert if an application pool is unresponsive for more than, say, 2 minutes. I thought about having a virtual directory mapped into each site with an ASP.NET page that we could poll via our existing monitoring system (HostMonitor). Does anyone else have experience in this area?
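
    Not part of the original question, but a different technique worth naming: polling pool state on the box itself rather than via a page in each site. A sketch assuming the IIS 7 WebAdministration module, PowerShell 2.0's Send-MailMessage, and placeholder mail settings:

        Import-Module WebAdministration

        # Alert on any application pool that is not in the Started state
        Get-ChildItem IIS:\AppPools | ForEach-Object {
            $state = (Get-WebAppPoolState $_.Name).Value
            if ($state -ne 'Started') {
                Send-MailMessage -To "ops@example.com" -From "iis@example.com" `
                    -Subject "App pool $($_.Name) is $state" `
                    -SmtpServer "smtp.example.com"
            }
        }

    Note this catches stopped pools but not hung ones, so it would complement, not replace, the per-site HTTP poll the asker describes.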

    Read the article

  • Can you add DoubleClick macros to existing ads

    - by picus
    Setup: A few weeks back I made some very simple HTML5 "ads" to run on a few of our partner sites. They weren't paid ads, as we also manage these sites; however, there are a few of them, so I made a modular solution that is hosted on one of our web servers and included on each page via JavaScript, which outputs an iframe. Each search (each ad has a search box) or click appends a URL param that we track using custom vars in Google Analytics. In essence, the ad is an HTML page served in an iframe via JavaScript. Problem: We have an opportunity to run these ads on a third-party site. I had sent them a brief how-to for inserting them, and they came back saying: "The creative code doesn't contain the %u macro. We can’t substitute the default click-through URL without it." I am somewhat familiar with DoubleClick from a web developer's POV; I have inserted DC DART tags before and have even implemented the ad tool for publishers. I have not, however, actually ever created an ad for the DoubleClick network before. I assume the publisher needs these tags to track clicks and hence charge us. However, they have not responded to my questions about this. Are macros something I can just add to, or replace, the existing links with, or do I need to completely set up the ad with DoubleClick? That would be a big issue in the short term, given we do not have an advertiser's account set up with them. Thanks in advance

    Read the article

  • How can I handle more traffic? My VPS fails!!!

    - by qtrix
    I have a web site, a photo gallery with about 400 photos, running Gallery 3 with MySQL, hosted on a VPS from myhosting.com (CPU 1792 MHz, 2048 MB RAM). Everything seems to be OK, but there is one big problem: once traffic reaches ~20 people online, the website starts loading really, really slowly. In fact, the website can't be loaded for about 30-60 seconds. What should I do? Buy more RAM/CPU on the same VPS? Move to a dedicated server, or maybe myhosting.com just sucks? What do you recommend?

    Read the article

  • Refactoring and Open / Closed principle

    - by Giorgio
    I have recently been reading a web site about clean code development (I do not put a link here because it is not in English). One of the principles advertised by this site is the Open Closed Principle: each software component should be open for extension and closed for modification. E.g., when we have implemented and tested a class, we should only modify it to fix bugs or to add new functionality (e.g. new methods that do not influence the existing ones). The existing functionality and implementation should not be changed. I normally apply this principle by defining an interface I and a corresponding implementation class A. When class A has become stable (implemented and tested), I normally do not modify it too much (possibly, not at all). That is: if new requirements arrive (e.g. performance, or a totally new implementation of the interface) that require big changes to the code, I write a new implementation B, and keep using A as long as B is not mature. When B is mature, all that is needed is to change how I is instantiated. If the new requirements suggest a change to the interface as well, I define a new interface I' and a new implementation A'. So I and A are frozen and remain the implementation for the production system as long as I' and A' are not stable enough to replace them. So, in view of these observations, I was a bit surprised that the web page then suggested the use of complex refactorings, "... because it is not possible to write code directly in its final form." Isn't there a contradiction/conflict between enforcing the Open/Closed Principle and suggesting the use of complex refactorings as a best practice? Or is the idea here that one can use complex refactorings during the development of a class A, but when that class has been tested successfully it should be frozen?

    Read the article

  • Batch file to uninstall all Sun Java versions?

    - by Ricket
    I'm setting up a system to keep Java in our office up to date. Everyone has all different versions of Java, many of them old and insecure, and some dating back as far as 1.4. I have a System Center Essentials server which can push out and silently run a .msi file, and I've already tested that it can install the latest Java. But old versions (such as 1.4) aren't removed by the installer, so I need to uninstall them. Everyone is running Windows XP. The neat coincidence is that Sun just got bought by Oracle, and Oracle has now changed all the instances of "Sun" to "Oracle" in Java. So, I can conveniently not have to worry about uninstalling the latest Java, because I can just do a search and uninstall all Sun Java programs. I found the following batch script on a forum post which looked promising:

        @echo off & cls
        Rem List all Installation subkeys from uninstall key.
        echo Searching Registry for Java Installs
        for /f %%I in ('reg query HKLM\SOFTWARE\microsoft\windows\currentversion\uninstall') do echo %%I | find "{" > nul && call :All-Installations %%I
        echo Search Complete..
        goto :EOF

        :All-Installations
        Rem Filter out all but the Sun Installations
        for /f "tokens=2*" %%T in ('reg query %1 /v Publisher 2^> nul') do echo %%U | find "Sun" > nul && call :Sun-Installations %1
        goto :EOF

        :Sun-Installations
        Rem Filter out all but the Sun-Java Installations. Note the tilda + n, which drops all the subkeys from the path
        for /f "tokens=2*" %%T in ('reg query %1 /v DisplayName 2^> nul') do echo . Uninstalling - %%U: | find "Java" && call :Sun-Java-Installs %~n1
        goto :EOF

        :Sun-Java-Installs
        Rem Run Uninstaller for the installation
        MsiExec.exe /x%1 /qb
        echo . Uninstall Complete, Resuming Search..
        goto :EOF

    However, when I run the script, I get the following output:

        Searching Registry for Java Installs
        'DEV_24x6' is not recognized as an internal or external command,
        operable program or batch file.
        'SUBSYS_542214F1' is not recognized as an internal or external command,
        operable program or batch file.

    And then it appears to hang, and I press Ctrl-C to stop it. Reading through the script, I don't understand everything, but I don't know why it is trying to run pieces of registry keys as programs. What is wrong with the batch script? How can I fix it, so that I can move on to somehow turning it into an MSI and deploying it to everyone to clean up this office? Or alternatively, can you suggest a better solution or existing MSI file to do what I need? I just want to make sure to get all the old versions of Java off of everyone's computers, since I've heard of exploits that cause web pages to load using old versions of Java, and I want to avoid those.
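
    One plausible reading of that output (an assumption, not a confirmed diagnosis): some key names under Uninstall contain & characters, and when the for-variable %%I is echoed unquoted, cmd treats everything after each & as a new command, which matches the "'DEV_24x6' is not recognized" symptom exactly. A sketch of the first loop with the value quoted and the default space-tokenizing disabled; the inner loops would need the same treatment:

        Rem "delims=" keeps key paths containing spaces in one piece, and quoting
        Rem %%I stops & characters from being run as command separators
        for /f "delims=" %%I in ('reg query HKLM\SOFTWARE\microsoft\windows\currentversion\uninstall') do (
            echo "%%I" | find "{" > nul && call :All-Installations "%%I"
        )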

    Read the article

< Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >