Search Results

Search found 47679 results on 1908 pages for 'web admin'.

Page 235/1908 | < Previous Page | 231 232 233 234 235 236 237 238 239 240 241 242  | Next Page >

  • How can a large company foster excellence in its engineers?

    - by Joshiatto
    I am tasked with improving the skills (quality & speed) of engineers in my company. Here are some ideas:

    - Pair Programming
    - TDD
    - Automated check-in policies
    - Talks given by experts
    - Awards for coding excellence
    - Encourage competition among engineers to contribute to GitHub
    - Publish standards and practices docs on the intranet site
    - "Gamification" of engineering: somehow make becoming badasses into a game they will enjoy playing
    - Training
    - Showcase GitHub check-ins on screens around the office
    - Add an "engineer of the month" to the intranet home page

    How can I drive traffic to the intranet home page? What crazy futuristic idea would drive engineers to go to the page every day to see which of their peers are making more money than them (inferred via recognition), and then go off and improve their skills and productivity to see their standings improve on the home page? Or any ideas just to foster collaboration and love for their jobs, so they start taking more pride in their work? Don't take my ideas as symptomatic of our org. I take full responsibility for not knowing the right way to do this.

    Read the article

  • Which Browser is the Best to Use When Running Your Laptop on Battery Power?

    - by Asian Angel
    Squeezing the maximum amount of usage time out of your laptop battery can be challenging at times…it all depends on the software you are using. One piece of software we are all likely to be using is a browser, to keep up with our online lives… If your laptop is older, then getting the most out of its aging battery is definitely a must. The good folks over at the 7 Tutorials blog have done a comparison test to see which browser is the gentlest on your laptop’s battery, and the results may surprise you. You can view the results by visiting the link below… Had better (or worse) luck with one of the browsers tested? Then make sure to share the results with your fellow readers in the comments! Test Comparison: Which Browser Will Make Your Laptop’s Battery Last Longer? [7 Tutorials]

    Read the article

  • Sub routing in a SPA site

    - by Anders
    I have a SPA site that I'm working on, and I have a requirement that a page view model can have sub-routes. I'm currently using this 'pattern' for the site:

        MyApp.FooViewModel = MyApp.define({
            meta: {
                query: MyApp.Core.Contracts.Queries.FooQuery,
                title: "Foo"
            },
            init: function (queryResult) { },
            prototype: { }
        });

    In the master view model I have a route table:

        this.navigation(new MyApp.RoutesViewModel({
            Home: { model: MyApp.HomeViewModel, route: String.empty },
            Foo: { model: MyApp.FooViewModel }
        }));

    The meta object defines which query should populate the top-level view model when it's invoked through sammyjs. This is all fine, but it does not support sub-routing. My plan is to change the meta object so that it can (optionally, of course) look like this:

        meta: {
            query: MyApp.Core.Contracts.Queries.FooQuery,
            title: "Foo",
            route: { barId: MyApp.BarViewModel }
        }

    When sammyjs detects a barId in the query string, the Bar model will be executed and populated through its own meta object. Is this a good design?
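
    For reference, a minimal Sammy.js sketch of how a parent route plus an optional sub-route parameter could be wired up. This only illustrates the routing side under stated assumptions: MyApp.show is a hypothetical helper standing in for the framework above, and the hash routes are placeholders.

        // Sketch only: assumes Sammy.js; MyApp.show() is a hypothetical helper.
        var app = Sammy('#main', function () {
            // Top-level route: activate FooViewModel (populated via its meta.query).
            this.get('#/foo', function () {
                MyApp.show(MyApp.FooViewModel);
            });

            // Sub-route: the extra barId parameter activates the child view model
            // declared in meta.route, after the parent has been set up.
            this.get('#/foo/:barId', function () {
                var parent = MyApp.show(MyApp.FooViewModel);
                MyApp.show(MyApp.BarViewModel, { barId: this.params['barId'], parent: parent });
            });
        });

        app.run('#/foo');

    Whether the child is keyed off a path segment (as above) or a query-string value (as in the question) is a secondary detail; the appeal of the meta.route approach is that the parent view model stays unaware of which child is active.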

    Read the article

  • Should I avoid SharePoint Development in Visual Studio?

    - by SaphuA
    Not long ago I started an internship at a company that supplies SharePoint consultancy, hosting and development. While their consultancy seems to be pretty good and solid, I feel their development department lacks direction. The reason for this, most likely, is that they stopped outsourcing not too long ago. One thing that I've frequently bumped my head into is the following: my supervisor strongly insists that everything that can be done natively in SharePoint (somehow this includes editing XSLT files in Designer) should be done in SharePoint, even if this results in longer development time (at least when they make me write XSLT) and reduced usability. Her main arguments for this are better maintainability, and that editing the functionality doesn't require programming knowledge. I feel the company is a little biased and I am unable to get a decent discussion going. This is why I am looking for other places to get some responses on the subject (and not only on the arguments of my supervisor, but more on the subject in general). Kind regards

    Read the article

  • Software Design for Product Verticals and Service Verticals

    - by Rachel
    In every industry there are two verticals, a Product Vertical and a Service Vertical, so my question is: how does the design approach change when designing software for a Product Vertical as compared to developing software for a Service Vertical? What are the pros and cons of each case? Also, in the case of a Product Vertical, how do you go about designing the product or its features, and what steps are involved? Lastly, I was reading the How Facebook Ships Code article, and it appears that Product Managers have very little influence on how the product is developed; responsibility lies mainly with the developer of the feature. Is this good practice, and why would one go for this approach? What would be your comment on this kind of approach?

    Read the article

  • Is the term "web portal" obsolete?

    - by John Hamelink
    Firstly, sorry if this is the wrong place: I've looked at all the programming-related boards and this one seems like the best fit - correct me if I'm wrong. My boss uses the term "portal" for the project I work on all the time. To me, the word makes me think of Yahoo in the late 90s. Does the word "portal" have old-school connotations, or is it just me? Do you think it's ok to use it, and not drag our client's perception of the product down into the middle-ages? Or again, am I just being weird?

    Read the article

  • Implement service layer in MVC

    - by Dan H
    We have a defined service layer hosted in WCF. We are now building a website that will need to use the services' functionality. The website is being written in ASP.NET MVC 4 and I'm trying to decide how to reference the WCF services from the MVC app. It's a large, complex website and it will be changing on a weekly basis. My first reaction is to abstract out the service references (about 7 services on this one WCF host) and create a service facade library with which the website interacts. But I don't know exactly how to use the service facade in MVC. I'm starting to think the models will be responsible for it: when the controller gets a model, that model should call the service (if needed) and return what the controller asked for. I'm trying to avoid having the MVC app know the details of the service references. So I could have a model factory that creates whatever model the controllers need, and they can use the service facade to accomplish it. Is this a good plan, or am I off track?

    Read the article

  • File Upload Forms: Security

    - by Snow_Mac
    So I'm building an application for uploading files. We're paying scientists to contribute information on pests, diseases and bugs (for plants). We need the ability to drag and drop a file to upload it. The question becomes: since the users will be authenticated and set up by us, will it be necessary to include a virus scanner to prevent the uploading and insertion of malicious files? How important is this?

    Read the article

  • Architecture for interfacing multiple applications

    - by Erwin
    Let's say you have a Master Database and a few External/Internal applications that use WebServices to interface data. What would be your preferred architecture to interface data from and to those applications? Would you put some sort of Enterprise Service Bus in between? Like BizTalk? Or something cheaper? We don't want to block applications while they are interfacing, but we do want to use return codes from the interfaces to determine if we need to take some actions in the originating application or not.

    Read the article

  • JSF 2.2 recent progress - Early Draft

    - by alexismp
    JSF specification lead Ed Burns has an update on the progress of JSF 2.2, another component which should be required as part of the upcoming Java EE 7 standard. This includes a reminder of the scope of this specification, the availability of the early draft, and eight specific features that are being worked on, split into "Mostly Specified Features" and "Not Yet Fully Specified Features" (I think you can read the latter as "at risk"). My favorite is "763-EverythingIsInjectable". Remember that JSF 2.2 is due out in the middle of 2012, which is in time to be integrated into the Java EE 7 platform JSR (currently scheduled for the second half of 2012). In the meantime, JSF 2.2 nightly builds are available.

    Read the article

  • What bots are really worth letting onto a site?

    - by blunders
    Having written a number of bots, and having seen the massive number of random bots that happen to crawl a site, I am wondering: if the point of letting bots onto a site is the potential for them to send real traffic back, is there any reason to allow bots that are not known to send real traffic back? And how do you spot these "good" bots, based on how they identify themselves, the IPs they come from, their behavior, and so on?

    Read the article

  • Google Webmaster Tools, DNS Errors & HostPapa

    - by Gravy
    I received a message from Google Webmaster Tools: "Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 40.0%. You can see more details about these errors in Webmaster Tools. Recommended action..." I contacted HostPapa and they deny that there is any issue with the site or server! Support in terms of what I can actually do to resolve this issue is non-existent. The site is currently online, and I don't know much about DNS... so any advice about what I can do to resolve this problem would be much appreciated. Basically, the message from Google says that it is my webhost's fault, and the message from my webhost (HostPapa) is: "Just tell Google to crawl your site, as there are no errors."

    Read the article

  • What management/development practices do you change when a team of 1-3 developers grows to 10+?

    - by Mag20
    My team built a website for a client several years ago. The site traffic has been growing very quickly, and our client has been asking us to grow our team to fill their maintenance and feature request needs. We started with a small number of developers, and our team has grown - now we're in the double digits. What management/development changes are the most beneficial when a team grows from a small "garage-size" team to 10+ developers?

    Read the article

  • Inline Image in ASP.NET

    - by Ricardo Peres
    Inline images are a technique that, instead of referring to an external URL, includes all of the image's content in the HTML itself, in the form of a Base64-encoded string. It avoids a second browser request, at the cost of making the HTML page slightly heavier and not using the cache. Not all browsers support it, but current versions of IE, Firefox and Chrome do. In order to use inline images, you must write the img element's src attribute like this:

        <img src="data:image/gif;base64,R0lGODlhEAAOALMAAOazToeHh0tLS/7LZv/0jvb29t/f3//Ub//ge8WSLf/rhf/3kdbW1mxsbP//mf///yH5BAAAAAAALAAAAAAQAA4AAARe8L1Ekyky67QZ1hLnjM5UUde0ECwLJoExKcppV0aCcGCmTIHEIUEqjgaORCMxIC6e0CcguWw6aFjsVMkkIr7g77ZKPJjPZqIyd7sJAgVGoEGv2xsBxqNgYPj/gAwXEQA7"
             width="16" height="14" alt="embedded folder icon"/>

    The syntax is: data:[<mediatype>][;base64],<data>

    I developed a simple control that allows you to use inline images in your ASP.NET pages. Here it is:

        public class InnerImage : Image
        {
            protected override void OnInit(EventArgs e)
            {
                // Read the referenced image from disk and replace the URL
                // with a Base64-encoded data: URI.
                String imagePath = this.Context.Server.MapPath(this.ImageUrl);
                String extension = Path.GetExtension(imagePath).Substring(1);
                Byte[] imageData = File.ReadAllBytes(imagePath);

                this.ImageUrl = String.Format("data:image/{0};base64,{1}", extension, Convert.ToBase64String(imageData));

                base.OnInit(e);
            }
        }

    Simple, don't you think?
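
    As a usage note, here is a sketch of how such a control might be consumed on a page, assuming it lives in a hypothetical MyControls namespace and assembly (the tag prefix and image path are placeholders, not part of the original post):

        <%@ Register TagPrefix="my" Namespace="MyControls" Assembly="MyControls" %>
        ...
        <my:InnerImage runat="server" ImageUrl="~/images/folder.gif" Width="16" Height="14" AlternateText="embedded folder icon" />

    The control simply rewrites ImageUrl during OnInit, so it renders like a normal asp:Image whose src happens to be a data: URI.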

    Read the article

  • CRM + Invoicing/Billing + Ticketing for a small web design company

    - by Mike
    Hi everyone, I am currently using ActiveCollab but it lacks the typical CRM features; I can't even keep notes about a customer saved in one place. What I am looking for is a simple but efficient CRM application that allows me to store all the (potential) customers along with notes on phone calls, contracts, and agreements. On the billing end, I should be able to keep track of invoices and payments, along with a bit of sales reporting. A support-ticket feature would be a great extra, but it's not really necessary. I looked at VTiger and SugarCRM at first, though they look too complex on the sales/campaigns end and completely lack the billing side. Do you have some good apps/services to suggest? :) Any programming language or OS would do. Both paid and free. Thanks Mike

    Read the article

  • Web Based CRM For Banks.

    Banks have to make several transactions in a day; buyers have to give their email, phone numbers, address, names, social security number and credit card information. Huge amount of information is pro... [Author: James Wong - Computers and Internet - March 29, 2010]

    Read the article

  • dynamic urls and links on one web page

    - by John
    I am trying to figure out how to create dynamic links and URLs on a static webpage. What I want to do is the following: I have a single webpage, for example MYWEBPAGEdotCOM/INDEX.HTML, that will always look the same, except for one link on the page. That link would be, for example:

        LINK TO AFFILIATE: affiliatedotCOM/my-affiliate_code_here_DYNAMIC_REFERER

    The only thing that would change is the "DYNAMIC_REFERER", based on which dynamic URL was used to reach the page:

        MYWEBPAGEdotCOM/INDEX.PHP_id=test1
        MYWEBPAGEdotCOM/INDEX.PHP_id=test2
        MYWEBPAGEdotCOM/INDEX.PHP_id=test3
        MYWEBPAGEdotCOM/INDEX.PHP_id=test4

    which would change only the dynamic link on the page to:

        affiliatedotCOM/my-affiliate_code_here_test1
        affiliatedotCOM/my-affiliate_code_here_test2
        affiliatedotCOM/my-affiliate_code_here_test3
        affiliatedotCOM/my-affiliate_code_here_test4

    Can someone tell me how I could go about doing this? I just don't want to have to make 100's of pages, and doing it this way would prevent me from having to do so.
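
    A minimal client-side sketch of one way to do this, assuming the page keeps an id=... query-string parameter and the affiliate link carries a placeholder element id (the element id and URLs below are hypothetical):

        <a id="affiliate-link" href="http://affiliate.example.com/my-affiliate_code_here_default">LINK TO AFFILIATE</a>

        <script>
        // Read the "id" query-string parameter and rewrite the affiliate link.
        (function () {
            var match = window.location.search.match(/[?&]id=([^&]*)/);
            var referer = match ? decodeURIComponent(match[1]) : '';
            var link = document.getElementById('affiliate-link');
            if (link && referer) {
                link.href = 'http://affiliate.example.com/my-affiliate_code_here_' + encodeURIComponent(referer);
            }
        })();
        </script>

    The same effect could be achieved server-side (for example, a small PHP snippet reading $_GET['id']), which is preferable if the affiliate link needs to be visible to crawlers.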

    Read the article

  • Is dynamic HTML layout good from an SEO perspective?

    - by sll
    Just wondering whether a dynamically built HTML layout is fine from an SEO perspective. Let's assume an e-commerce engine and its most popular page, the products catalog, where 90% of the page is built using AJAX and the MVVM library knockoutjs, which builds the HTML on the fly on the client side. How would search bots parse such content? Would it be indexed properly, and would it be as effective, from an SEO perspective, as server-side-built HTML pages?

    Read the article

  • Robots.txt practices with .htaccess redirections (inherits)

    - by Jayhal
    I have a question regarding how to write robots.txt files for many domains and subdomains with redirects in place. We have a hosting account with a primary domain and add-on domains. All of our domains and subdomains, including the primary domain, are redirected via .htaccess 301s to their own subdirectories in the primary domain's root directory. I'm confused about how I would write the robots.txt for certain directories.

    First, I wanted to confirm that I am right in understanding that, for domains and subdomains, crawlers will look to the directory that acts as that URL's root directory for the crawling rules (robots.txt). Also, that a directory will not be affected by a robots.txt present in its parent directory if the directory has its own domain/subdomain and that URL is the one being accessed by crawlers. (I'm pretty sure, but I wanted to confirm I didn't have a fundamentally flawed understanding of robots.txt.)

    Second, in the original root directory on the account (where the primary domain was directed before the .htaccess was put in place), what should the robots.txt contain? When crawlers look to crawl our primary domain, will they look to the original root directory for the robots.txt, or will they reference the file contained in the new subdirectory where all the primary domain's site files are located? If so, what should the root's robots.txt include, if anything at all? Would I be right to include a simple 'Disallow: /' for all agents, and then include more specific robots.txt files in each subdirectory with more specific instructions? Would that affect the crawling of the directory where the primary domain is now redirected?

    Any help is greatly appreciated. Thanks!
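
    For illustration, a minimal sketch of the root-level file being proposed, under the assumption that crawlers request /robots.txt separately for each host and that each domain's own document root serves its own copy (the comments and paths are placeholders, not the asker's actual layout):

        # robots.txt served from the account's original root directory
        # (the blanket "Disallow: /" for all agents that the question proposes)
        User-agent: *
        Disallow: /

    Each add-on domain or subdomain would then carry its own robots.txt at the top of the directory acting as its document root, since crawlers only ever fetch robots.txt from the root of the host they are crawling, never from a parent directory.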

    Read the article

  • Useful code paste site tools

    - by acidzombie24
    I know there are site tools to check if your webpage is alive, has compression, etc., but let's not get into that. What are useful sites to paste code into and share links of? The three I know are:

    - http://codepad.org/ - shows source and runs code online
    - http://www.pastie.org/ - share source with syntax highlighting
    - http://jsfiddle.net/ - great for JS help or for the occasional test

    What else do you know of? One suggestion per answer. I'll let lints and validators slide, since you do paste code into them. Mention a weakness if you know of one, so others won't be surprised or disappointed.

    Read the article

  • GWT: reporting crawl errors for non-existent links

    - by pixeline
    Google Webmaster Tools is reporting crawl errors for links that never existed, and if I check the "Linked from" tab for a given error link, it shows another URL that never existed either. They all mention joomla/, which is not the CMS used on this domain (it's WordPress, FYI). Example:

        http://example.com/joomla/index.php/component/user/register
        Linked from: http://example.com/joomla/component/user/login?return=L2######

    What is going on?

    UPDATE 1: I tried something. I provided one of the faulty URLs to the "Fetch as Google" functionality. Instead of returning a 404, it returns a 301 to another Joomla page:

        HTTP/1.1 301 Moved Permanently
        Server: Apache/2.4.3
        X-Powered-By: PHP/5.4.4-10
        X-Pingback: http://example.com/xmlrpc.php
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Set-Cookie: PHPSESSID=1fgr5v2oip39miibuptd51s8h0; path=/
        Set-Cookie: woocommerce_items_in_cart=0; expires=Sat, 12-Jan-2013 11:44:01 GMT; path=/
        Location: http://example.com/joomla/component/user/register
        Content-Type: text/html; charset=iso-8859-1
        Content-Length: 387
        Date: Sat, 12 Jan 2013 12:44:01 GMT
        Via: 1.1 varnish
        Connection: keep-alive
        Accept-Ranges: bytes
        Age: 0

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="http://example.com/joomla/component/user/register">here</a>.</p>
        <p>Additionally, a 301 Moved Permanently error was encountered while trying to use an ErrorDocument to handle the request.</p>
        </body></html>

    Read the article

  • SSRS/SharePoint - Reports made in Report Builder not being listed in SharePoint Web Part

    - by Greg_the_Ant
    I followed the steps here to integrate Reporting Services with SharePoint in native mode. I made a page in SharePoint with the Report Explorer web part and everything is working. The issue is that when I create a report with the web-based Report Builder tool, it will show up in the Report Manager page, but not show up in the Report Explorer web part on the SharePoint page. New reports I upload using Report Manager do show up. Does anyone have any ideas? I'm really stuck.

    Read the article

  • Can AJAX in a CMS slow down your server?

    - by Saif Bechan
    I am currently developing some plugins for WordPress, and I was wondering which route to take. Let's take an example: you want to display the last 3 tweets on your page.

    Option 1. You do things the normal way inside WordPress. Someone visits the website; while generating the page, you fetch the tweets in PHP via the Twitter API and just display them where you want. Now the small problem with this is that you have to wait for the response from Twitter. This takes a few ms. No real problem, but this question is just out of curiosity.

    Option 2. Here you don't do anything in WordPress on the initial load, but you do have the API inside. You just generate the page, and as soon as the page is done on the client side, you make a small AJAX call back to the server via a WordPress plugin to fetch your latest tweets asynchronously (see the sketch below). Now the problem with this, IMO, is that you put much more stress on your server. For starters you have two HTTP requests instead of one. Secondly, the WordPress core has to load two times instead of one.

    Other options. Now I know there are a lot of other options:
    1) Getting the tweets directly via JavaScript: no stress on the server at all.
    2) Caching the tweets so they are fetched from the DB instead of using the API every time.
    3) Getting the tweets from an AJAX call that is not a WordPress plugin.
    4) Many more.

    My question: if you only compare Option 1 and Option 2, which would be the better choice?
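
    For concreteness, a minimal client-side sketch of Option 2, assuming jQuery (which WordPress bundles) and a server-side handler registered under the hypothetical action name get_latest_tweets via the standard wp_ajax_get_latest_tweets / wp_ajax_nopriv_get_latest_tweets hooks; the element id, action name and response shape are placeholders:

        // Option 2 sketch: fetch the tweets only after the page has rendered.
        jQuery(function ($) {
            $.getJSON('/wp-admin/admin-ajax.php', { action: 'get_latest_tweets' })
                .done(function (tweets) {
                    var list = $('#latest-tweets').empty();
                    $.each(tweets, function (i, tweet) {
                        list.append($('<li/>').text(tweet.text));
                    });
                });
        });

    This is exactly the extra load the question describes: the second request boots the WordPress core again. Caching the Twitter response on the server (for example in a transient) reduces the cost of either option, since the API is then only hit when the cache expires.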

    Read the article
