Search Results

Search found 12909 results on 517 pages for 'domain'.

Page 398/517 | < Previous Page | 394 395 396 397 398 399 400 401 402 403 404 405  | Next Page >

  • Why don't I have a loop error with these redirects?

    - by byronyasgur
    I know this may seem like a bit of a question in reverse, but I don't actually seem to have a problem; I just want to make sure before I proceed. I have two domains, domain1.com and domain2.com, and a directory my_directory at domain2.com. I have domain2.com set up as an "addon domain" in the cPanel account of domain1.com, so that when I go to domain2.com I am taken to domain1.com/my_directory, but the browser shows domain2.com in the address bar, so it looks and acts like (and is) a separate site. However, when people browse to domain1.com/my_directory I want the address bar to show domain2.com, not domain1.com/my_directory. So I put a redirect in the .htaccess file to redirect domain1.com/my_directory to domain2.com, and it works perfectly, but I think it shouldn't, and I'm worried I've done something wrong. My question is this: domain2.com was already redirected to domain1.com/my_directory in the first place (I see the redirect in my cPanel under addon domains), so by adding the second redirect in the .htaccess file I should be creating a loop! Could somebody please set my mind at rest and show me why not?
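
    One way to set your mind at rest empirically is to trace the redirect chain and see where it terminates. Here is a minimal sketch (Python 3 standard library only; the URL at the bottom is the question's placeholder, not a real address) that follows Location headers by hand and stops if a URL repeats:

        import http.client
        from urllib.parse import urlsplit, urljoin

        def trace(url, max_hops=10):
            hops = [url]
            for _ in range(max_hops):
                parts = urlsplit(url)
                Conn = (http.client.HTTPSConnection
                        if parts.scheme == "https" else http.client.HTTPConnection)
                conn = Conn(parts.netloc, timeout=10)
                conn.request("HEAD", parts.path or "/")
                resp = conn.getresponse()
                location = resp.getheader("Location")
                conn.close()
                if resp.status not in (301, 302, 303, 307, 308) or not location:
                    return hops  # chain terminated normally: no loop
                url = urljoin(url, location)  # Location may be relative
                if url in hops:
                    return hops + [url, "LOOP!"]
                hops.append(url)
            return hops + ["gave up after max_hops"]

        print(trace("http://domain1.com/my_directory/"))

    If the chain simply ends at domain2.com, that is likely because a cPanel addon domain is an internal document-root mapping rather than an HTTP redirect, so only one browser-visible redirect exists and no loop can form.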

    Read the article

  • Removing 301 redirect from site root

    - by Jon Clements
    I'm having a look at a friend's website (a fairly old PHP-based one) which they've been advised needs restructuring. The key points are:

    - URLs should be lower case and more "friendly".
    - The root of the domain should not be redirected.

    I'm happy with the first point (the URLs needed tidying up anyway) and have a draft plan of action; however, the second is baffling me, not only as to the best way to do it, but also whether it should be done at all. Currently http://www.example.com/ is redirected to http://www.example.com/some-link-with-keywords/ using the following index.php in the root of the Apache2 instance:

        <?php
        $nextpage = "some-link-with-keywords/";
        header( "HTTP/1.1 301 Moved Permanently" );
        header( "Status: 301 Moved Permanently" );
        header( "Location: $nextpage" );
        exit(0); // This is optional but suggested, to avoid any accidental output
        ?>

    As far as I'm aware, this has been the case for around three years, and I'm sorely tempted to advise not to worry about it. It would appear taking off the 301 could:

    - potentially affect page ranking (as the 'homepage' would disappear; although it couldn't really disappear, because of the next point...)
    - introduce maintenance issues, as existing users would still have the redirected page in their cache
    - following the above, introduce duplicate content
    - confuse Google/other SEs as to what the homepage actually is now

    I may be over-analysing this, but I have a feeling it's not as simple as removing the 301 from the root and 301'ing the previous target to the root... Any suggestions (including "it's not worth it") are sincerely appreciated.
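
    If the decision is made to remove the 301 and reverse it, a quick way to confirm the swap behaves as intended is to request both URLs without following redirects and inspect the status codes and Location headers. A small sketch (Python 3 standard library; the example.com URLs are the question's stand-ins):

        import urllib.error
        import urllib.request

        class NoRedirect(urllib.request.HTTPRedirectHandler):
            def redirect_request(self, *args, **kwargs):
                return None  # surface 3xx responses instead of following them

        opener = urllib.request.build_opener(NoRedirect)
        for url in ("http://www.example.com/",
                    "http://www.example.com/some-link-with-keywords/"):
            try:
                resp = opener.open(url)
                print(url, "->", resp.getcode())
            except urllib.error.HTTPError as err:
                print(url, "->", err.code, err.headers.get("Location"))

    After the change, the root should answer 200 and the keyword URL should answer 301 with Location pointing at the root.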

    Read the article

  • Set up Work Manager Shutdown Trigger in WebLogic Server 10.3.4 Using WLST

    - by adejuanc
    WebLogic Server's Work Managers provide a way to control work and allocated threads. You can set different scheduling guidelines for different applications, depending on your requirements. There is a default self-tuning Work Manager, but you might want to set up a custom Work Manager in some circumstances: for example, when you want the server to prioritize one application over another when a response time goal is required, or when a minimum thread constraint is needed to avoid deadlock. The Work Manager Shutdown Trigger is a tool to help with stuck threads; when it fires, it will do the following:

    - Shut down the Work Manager.
    - Move the application to Admin state (not active).
    - Change the server instance health state to failed.

    Example of a Shutdown Trigger set in the config.xml for your domain:

        <work-manager>
          <name>stuckthread_workmanager</name>
          <work-manager-shutdown-trigger>
            <max-stuck-thread-time>30</max-stuck-thread-time>
            <stuck-thread-count>2</stuck-thread-count>
          </work-manager-shutdown-trigger>
        </work-manager>

    Understand that any misconfiguration of a Work Manager can lead to poor performance on the server. Any changes must be made and tested before going to production. How can one create a WorkManagerShutdownTrigger for WLS 10.3.4 using WLST? You should be able to create one by following these steps:

        edit()
        startEdit()
        cd('/SelfTuning/mydomain/WorkManagers')
        create('myWM','WorkManager')
        cd('myWM/WorkManagerShutdownTrigger')
        create('myWMst','WorkManagerShutdownTrigger')
        cd('myWMst')
        ls()
        # finally, save and activate the edit session so the new trigger takes effect
        save()
        activate()

    Read the article

  • How do you motivate peers to become better developers?

    - by Brian Rasmussen
    In my experience there seem to be two kinds of developers (if we simplify matters a great deal, of course). On the one hand we have the developers who may do a perfectly acceptable job, but who do not really care about the computer science part of their craft. They usually know few languages/technologies and are happy to let things stay that way. For whatever reason, they don't try to improve their computer science skills unless this is required in their current position. On the other hand, we have the geeks, or the pragmatic programmers if you subscribe to that idea. They play around with other languages and technologies and usually have knowledge about several topics outside the technical domain of their current job. I would like to see more developers who are enthusiastic about software development. If you share this point of view, what do you do to push your peers in that direction? Edit: a follow-up question inspired by one of the answers: as non-managers, should we really care about this? And why/why not?

    Read the article

  • Country specific content vs global content

    - by Ando
    I have a global product presentation website, myproduct.com. For certain countries I also own the country domain: myproduct.co.uk, myproduct.com.au, myproduct.es, myproduct.de, etc. The presentation website is translated into multiple languages, and I have set up redirects: myproduct.es redirects to myproduct.com/es/, myproduct.de redirects to myproduct.com/de/, etc. The content so far is the same, just translated into different languages. The advantage is that it's easy to keep the content aligned: everything is managed from one centralized dashboard (I'm using WordPress with qtranslate). Now I'm running into trouble, as for different countries I want localized content. For the UK I want to run different promotions and use a different reseller than for .com.au, so I would like users coming from myproduct.co.uk to see something different from those coming from myproduct.com.au (and not be redirected to myproduct.com as they are right now). How can I achieve this? I could duplicate the whole main website and modify only certain parts, but then I would have a lot of duplicate content (e.g. info about how the product works), and I would have pages that are likely to change (the FAQ page) that I would have to keep updated across all the websites. Or I could duplicate the main website only partially: the localized website would have only the pages that are different, and all other links would point to the .com site. This would solve the duplication problem but would confuse users, as they would navigate from .co.uk to .com without noticing and then wonder how to get back. Is there another, better option?
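
    The partial-duplication option boils down to country overrides with a global fallback. A minimal sketch of the lookup (plain Python purely for illustration, not WordPress/qtranslate code; the hostnames and page slugs are the question's examples):

        SHARED_PAGES = {
            "how-it-works": "Same everywhere, maintained once",
            "faq": "Same everywhere, maintained once",
        }

        COUNTRY_OVERRIDES = {
            "myproduct.co.uk":  {"promotions": "UK promo", "reseller": "UK reseller"},
            "myproduct.com.au": {"promotions": "AU promo", "reseller": "AU reseller"},
        }

        def page_for(host, slug):
            # Serve the localised version where one exists, otherwise fall back
            # to the shared global content, so only pages that genuinely differ
            # (promotions, resellers) are duplicated per country.
            return COUNTRY_OVERRIDES.get(host, {}).get(slug) or SHARED_PAGES.get(slug)

        print(page_for("myproduct.co.uk", "promotions"))   # UK promo
        print(page_for("myproduct.co.uk", "faq"))          # shared FAQ

    Served this way from one site, users never hop between .co.uk and .com, which sidesteps the navigation confusion described above.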

    Read the article

  • Best way to set up multiple sites' emails in my Gmail

    - by John
    I have a dozen sites and I want all of their email to come to my one Gmail ID, and I want to reply centrally from Gmail only. I've also added all of those addresses to the "send email as:" list in Gmail. I could add email forwarders in my cPanel, but in that case I won't be able to send email from addresses whose inboxes haven't been created (for example [email protected]). If I create an email account, then I'd receive emails in that inbox as well as forwarded (by the forwarder) to my Gmail ID. Otherwise, I can set up Gmail for my domains, but for a dozen domains I'm not sure that would be fine. I see at http://www.google.com/enterprise/apps/business/pricing.html that it is free for up to 10 users. But then, to send email from the web hosting, the PHP code will need SMTP login details, and leaving my important Gmail account details in my web hosting account is very risky, given my sites have been compromised twice. What is the best way to centralize all my emails so that I can read/reply/search from a single place?

    Read the article

  • Mocking concrete class - Not recommended

    - by Mik378
    I've just read an excerpt of the book "Growing Object-Oriented Software" which explains some reasons why mocking a concrete class is not recommended. Here is some sample code of a unit test for the MusicCentre class:

        public class MusicCentreTest {
            @Test
            public void startsCdPlayerAtTimeRequested() {
                final MutableTime scheduledTime = new MutableTime();

                CdPlayer player = new CdPlayer() {
                    @Override
                    public void scheduleToStartAt(Time startTime) {
                        scheduledTime.set(startTime);
                    }
                };

                MusicCentre centre = new MusicCentre(player);
                centre.startMediaAt(LATER);
                assertEquals(LATER, scheduledTime.get());
            }
        }

    And here is his first explanation: "The problem with this approach is that it leaves the relationship between the objects implicit. I hope we've made clear by now that the intention of Test-Driven Development with Mock Objects is to discover relationships between objects. If I subclass, there's nothing in the domain code to make such a relationship visible, just methods on an object. This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class."

    I can't figure out exactly what he means when he says: "This makes it harder to see if the service that supports this relationship might be relevant elsewhere and I'll have to do the analysis again next time I work with the class." I understand that the service corresponds to MusicCentre's method called startMediaAt. What does he mean by "elsewhere"? The complete excerpt is here: http://www.mockobjects.com/2007/04/test-smell-mocking-concrete-classes.html
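
    For contrast, here is the same test with the collaborator mocked rather than subclassed. This is a cross-language sketch using Python's unittest.mock instead of the book's Java/jMock, with MusicCentre re-declared minimally for the example; the point is that the expectation on the mock states the MusicCentre-to-player relationship explicitly instead of hiding it in a test-only subclass:

        from unittest.mock import Mock

        class MusicCentre:
            """Minimal Python re-sketch of the question's Java class."""
            def __init__(self, player):
                self.player = player

            def startMediaAt(self, time):
                self.player.scheduleToStartAt(time)

        LATER = "19:00"
        player = Mock(name="CdPlayer")        # stands in for the role, not a concrete class
        centre = MusicCentre(player)
        centre.startMediaAt(LATER)
        # The relationship is asserted in one visible expectation:
        player.scheduleToStartAt.assert_called_once_with(LATER)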

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-18

    - by Bob Rhubart
    Eye on Architecture
    This week the Oracle Technology Network Solution Architect Homepage features an Oracle Reference Architecture for Software Engineering, a new podcast focusing on why IT governance is important whether you like it or not, and information on the next free OTN Architect Day event.

    Enabling WebLogic Administrator Group Inside Custom ADF Application | Andrejus Baranovskis
    A short but informative technical post from Oracle ACE Director Andrejus Baranovskis.

    Oracle OpenWorld 2012 Hands-on Lab: Leading Your Everyday Application Integration Projects with Enterprise SOA
    Yet another session to squeeze into your already-jammed Oracle OpenWorld schedule. This hands-on lab focuses on how "Oracle Enterprise Repository, Oracle Application Integration Architecture (AIA) Foundation Pack, and Oracle SOA Suite work together to help you drive your enterprisewide integration projects."

    Mass Metadata Updates with Folders | Kyle Hatlestad
    "With the release of WebCenter Content PS5, a new folder architecture called 'Framework Folders' was introduced," explains Fusion Middleware A-Team blogger Kyle Hatlestad. "This is meant to replace the folder architecture of 'Folders_g'. While the concepts of a folder structure and access to those folders through Desktop Integration Suite remain the same, the underlying architecture of the component has been completely rewritten."

    Creating your first OAM 11g R2 domain | Chris Johnson
    Prolific Fusion Middleware A-Team blogger Chris Johnson reads the Oracle Identity and Access Management Installation Guide so you don't have to (though you probably should).

    Thought for the Day
    "Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice." — Christopher Alexander
    Source: SoftwareQuotes.com

    Read the article

  • Is there such a thing as "closure" with software work?

    - by Bobby Tables
    I burned out last year (after a decade of full-time programming jobs) and am on a sabbatical now. With all the self-examination, I've started to figure out some of the root causes of my burnout, and one of the major ones is basically this: there was never any real closure in any of the work I've ever done. It was always a case of getting into an open-ended support/maintenance grind and going stale. When I first entered the industry, I had this image of programming work being very project-based. And I expected projects to have a start, a middle, and an END. And then you move on and start on something totally new and fresh. Basically, I never expected that a lot (most) of software work involves supporting and maintaining the same code base for open-ended, long periods of time: years and even decades. That, combined with generally having itchy feet, makes me think that burnout is inevitable for me, after 2-3 years, in ANY full-time software job. All this sounds like I probably should have been a contractor instead of a full-timer. But when I discuss this with people, a lot of them say that even THEN you can't really escape having to go back and maintain/support the stuff you worked on, over and over (e.g. coming back on support contracts). The nature of software work is simply like that. There is no project closure, unlike in many other engineering fields. So my question is: is there ANY programming work out there which is based on short- to mid-term projects/stints and then moving on cleanly? And is there any particular industry domain or specialization where this kind of project work is typical?

    Read the article

  • What alternative is better to diagram this scenario?

    - by Mosty Mostacho
    I was creating and discussing a class diagram with a partner of mine. To simplify things, I've modified the real domain we're working on and made up the following diagram (the images themselves are not included in this excerpt): basically, a company works on constructions that are quite different from each other but are still constructions. Note I've added one field for each class, but there should be many more. Now, I thought this was the way to go, but my partner told me that if new construction classes appear in the future we would have to modify the Company class, which is correct. So the new proposed class diagram (also not shown) has the Company refer to a common Construction abstraction instead. Now I've been wondering:

    - Should the fact that in no place of the application will there be mixed lists of planes and bridges affect the design in any way?
    - When we have to list only planes for a company, how are we supposed to distinguish them from the other elements in the list without checking their class names?
    - Related to the previous question, is it correct to assume that this type of diagram should be high-level, and that this is something that shouldn't matter at this stage but should rather be thought about and decided at implementation time?

    Any comment will be appreciated.
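
    To make the second diagram concrete, here is a minimal sketch (Python used purely for illustration; the class and field names are invented to match the question's domain). Company depends only on the Construction abstraction, and "list only planes" is answered by filtering on type rather than on class-name strings:

        from dataclasses import dataclass, field

        class Construction:
            pass

        @dataclass
        class Plane(Construction):
            wingspan_m: float

        @dataclass
        class Bridge(Construction):
            span_m: float

        @dataclass
        class Company:
            constructions: list = field(default_factory=list)

            def of_type(self, cls):
                # isinstance keeps the filtering type-safe; no class-name checks
                return [c for c in self.constructions if isinstance(c, cls)]

        acme = Company([Plane(wingspan_m=60.0), Bridge(span_m=1200.0)])
        print(acme.of_type(Plane))   # only the planes

    Adding a new construction type then means adding a subclass, with no change to Company, which is the property the second diagram is after.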

    Read the article

  • How to manage and estimate unstructured requirements received from customers

    - by user20358
    A lot of the time I receive a software system's requirements from our customers in a very unstructured format. It is usually a bunch of "product development" guys from the customer's side who come up with these "proposed solutions" to the business problems they have. While they are the experts in the business domain, a lot of the time they don't have the solutions right. This results in:

    - multiple versions of the same requirement
    - two requirements getting mixed up into one
    - a few versions of the requirement later, the requirements which were combined together getting separated out again, each taking with it some of the new additions

    How do you work with such requirements coming in, and sort them out into proper use cases before development begins? What tools can we use to track a particular requirement's history, from the first time it was conceived till the time it gets crystallized into a proper use case? Estimating work against requirements received in such a fashion is a nightmare, which ends up in mistakes in understanding the requirement correctly and estimating the effort against it correctly. Any tips, tools, or tricks to make this activity more manageable? I'm just trying to get some insights from someone more experienced than I am in requirements management and effort estimation.

    Read the article

  • My new anti-patent BSD-based license: necessary and effective? [closed]

    - by paperjam
    I am writing multimedia software in a domain that is rife with software patents. I want to open source my software, but only for the benefit of those who don't play the patent game: enthusiasts, small companies, research projects, etc. The idea is, if my code would infringe a software patent somewhere and a company pays to license that patent, they then lose the right to use and distribute my software. Now, I detest license proliferation as much as anyone, but I can't find an existing OSI-approved license that does this. The GPL comes close, but it only restricts distribution, not use. I want to stop someone using my software should they obtain a patent license to do so. Does another license do this job? Is the wording below unambiguous? (I don't want a legal opinion, just whether it would be interpreted as I intend.)

        Copyright (c) <year>, <copyright holder>
        All rights reserved.

        Redistribution and use in source and binary forms, with or without
        modification, are permitted provided that the following conditions
        are met:

        [three standard new-BSD conditions not shown here]

        * No patents are licensed from any third party in respect of
          redistribution or use of this software or its derivatives unless
          the patent license is arranged to permit free use and distribution
          by all.

        THIS SOFTWARE IS... [standard BSD disclaimer not shown here]

    Read the article

  • Port numbers in Visual Studio projects and IIS

    - by aspdotnetuser
    I have a few questions about localhost and port numbers, as this is an area where I do not have a lot of knowledge, and because I recently had to work with setting up Visual Studio projects and IIS and there are things I'm not clear on. I thought it made more sense to include them all in one question instead of making separate questions.

    - I have noticed that a random port number is generated in projects I have worked on in the past, but I recently saw a project where the port number was fixed. What is the purpose of having a fixed/default localhost port number? I.e., is it particularly useful on projects that have many programmers working on them?
    - If a solution contains multiple projects (for example, WCF services, Domain, MVC/web pages), is it possible to set up a different localhost port for each of them? If so, what is the benefit of this?
    - If a solution contains multiple projects and has different localhost URLs/port numbers, must there be a corresponding website (and application pool) for each project in IIS? Or just for the project that contains the actual web pages?

    Read the article

  • starting up with VPS or cloud hosting? [closed]

    - by FlyOn
    Possible Duplicate: How to find web hosting that meets my requirements?

    Summary: I want to start hosting my product. I'd like to register domains (at some point). I'm a Linux beginner. Thinking about scalability and price, am I better off on a VPS to get started, or would some form of cloud hosting be better (not being familiar with either)?

    Full question: I'm creating a product where people can create their own 3D representations of whatever data/info they have, and (re)organise that data. The product is coming along beautifully in my local environment, but it's about time I start getting some form of hosting ready, and I could really use some advice on where/how to get started:

    - I'd like people to be able to move/register their own domains on my server. I could start without this just to demo the product, but it would be the very first thing on the todo list.
    - I'd like to automatically copy some files / install databases etc. for each domain.
    - I probably want to see if I can let users manage their own subdomains at some point, but for now I'd like to start as simple as possible.
    - I've always been on a Windows machine, so my Linux experience is quite basic. I really don't mind getting into it, but I'm thinking it's better to get my product out first of all and see where to go from there.
    - Although... I'd like things to be scalable. If I set up some reseller VPS now which only scales to 100 domains or so, that means I have to set up something else / move again when I pass that level, or that I'm in trouble if I suddenly get lots of new customers... hmm.
    - Finally, I need to start cheap. I'm putting all I have into starting this company and live on very little. So before I have any customers, 50 dollars a month is a fair bit and 100 dollars a month may be too much.

    If anyone has some tips to help get me started I'd be really grateful.

    Read the article

  • Are webhosts that require NS instead of a CNAME common?

    - by billpg
    I've just signed up with a webhost (which I prefer not to name) and I'm reasonably happy with it. The only nit was when I was ready to put a site online and I asked the support line what name I should point my 'www' CNAME to. They responded that they don't do that, and that I need to set my domain's NS records for the hosting to work: "Why would you ever want to do it that way? Our service to you includes DNS, and our servers are probably much better than the one your registrar provides." This was a bit of a surprise, as all of the other webhosts I've worked with happily support this. I've set up (e.g.) gallery.myfriend.example for friends by having them configure their DNS to CNAME 'gallery' to the name of a shared server at a webhost, and the webhost does name-based hosting for 'gallery.myfriend.example'. (Of course, if the webhost ever tells me I'm being moved from A.webhost.example to B.webhost.example, it would be my responsibility to change where the CNAME points. Really good webhosts would instead create myname.webhost.example for the IP of whichever server my stuff happens to be on, so I'd never have to worry about keeping my CNAME up to date.) Is my impression correct that most webhosts will happily support a service that begins with a CNAME hosted elsewhere, or is it really more common that webhosts will only provide a service if they control the DNS too?
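
    If it helps to see how a given name is actually wired up from the outside, a short lookup script can show whether it resolves via CNAME or is delegated via NS. This is a sketch assuming the third-party dnspython package (pip install dnspython); the hostname is the question's example:

        import dns.resolver

        def describe(name):
            for rtype in ("CNAME", "A", "NS"):
                try:
                    for rr in dns.resolver.resolve(name, rtype):
                        print(f"{name}  {rtype}  ->  {rr}")
                except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
                    pass

        describe("gallery.myfriend.example")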

    Read the article

  • In MVC, should the DAO be called from the Controller or the Model?

    - by tito
    I have seen various arguments against the DAO being called from the Controller class directly, and also against the DAO being called from the Model class. In fact, I personally feel that if we are following the MVC pattern, the controller should not be coupled with the DAO; the Model class should invoke the DAO from within, and the controller should invoke the model class. This is because we can then decouple the model class from the web application and expose its functionality in various ways, for example for a REST service to use our model class. If we write the DAO invocation in the controller, it would not be possible for a REST service to reuse the functionality, right? I have summarized both approaches below.

    Approach #1:

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx", "23", 1);
                new CustomerDAO().save(customer);
            }
        }

    Approach #2:

        public class CustomerController extends HttpServlet {
            protected void doPost(....) {
                Customer customer = new Customer("xxxxx", "23", 1);
                customer.save();
            }
        }

        public class Customer {
            ...........
            public void save() {
                new CustomerDAO().save(this);
            }
        }

    Note: here is a definition of the Model. "The model manages the behavior and data of the application domain, responds to requests for information about its state (usually from the view), and responds to instructions to change state (usually from the controller). In event-driven systems, the model notifies observers (usually views) when the information changes so that they can react." I would need an expert opinion on this, because I find many using #1 or #2. So which one is it?

    Read the article

  • Webserver on a rotating server with NAT IP or changing IPs

    - by hpsoftware
    I'll have to elaborate on my question, so please have patience.

    Explaining the logic: if you are familiar with LogMeIn, it installs client software on your computer and then keeps track of where your computer is, as long as it's connected to the internet. So you can always access your computer no matter where it is or what its IP is; you just go to logmein.com and access it.

    Now what I am asking: let's assume I have a website hosted on my laptop; let's call it the webserver. As I move around, I get a new IP, sometimes even on a hotel network. Is it possible to do something like what LogMeIn does, so I can keep moving my webserver around to new IPs, with some local client or something that keeps updating my IP? I am sure I would need a gateway server somewhere which is connected to my domain name via DNS, so that somebody accessing my website www.mywebsite.com reaches my main server and then gets routed to my laptop, which could be anywhere, as long as my gateway server is able to communicate with my webserver.

    I will keep updating the case description based on comments to make more sense. Please have patience with me. Regards
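
    The LogMeIn-like half of this is essentially dynamic DNS: the laptop phones home so the fixed gateway always knows its current address. A hedged sketch of that client side (Python 3 standard library; the gateway URL and hostname are assumptions for illustration, and api.ipify.org is one public IP-echo service among many):

        import json
        import time
        import urllib.request

        GATEWAY = "https://gateway.example.com/update"   # fixed, DNS-addressable server

        def current_public_ip():
            with urllib.request.urlopen("https://api.ipify.org") as resp:
                return resp.read().decode().strip()

        while True:
            try:
                payload = json.dumps({"host": "laptop-webserver",
                                      "ip": current_public_ip()}).encode()
                req = urllib.request.Request(GATEWAY, data=payload,
                                             headers={"Content-Type": "application/json"})
                urllib.request.urlopen(req)
            except OSError:
                pass   # network blip; retry on the next cycle
            time.sleep(300)   # re-register every five minutes

    The gateway then either updates a DNS record with a short TTL or reverse-proxies incoming requests for www.mywebsite.com to the last address it saw.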

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, there seems to be somewhat of a consensus that generalist programming languages (those that try to be good at everything: supporting multiple paradigms, supporting both very high- and very low-level programming, etc.) are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this is flawed:

    1. Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint.
    2. Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing that, when some new requirement surfaces, it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain-specific, and/or have very poor concurrency/parallelism support.
    3. When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these drops drastically if you're forced to use a different language than the one you've accumulated all this code in.

    What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying, or alive and well?

    Read the article

  • Spam bot constantly hitting our site 800-1,000 times a day. Causing loss in sales

    - by akaDanPaul
    For the past 5 months our site has been receiving hits from these 4 sites:

    - sheratonbd.com
    - newsheraton.com
    - newsheration.com
    - newsheratonltd.com

    Typically the exact URL they come from looks something like this: http://www.newsheraton.com/ClickEarnArea.aspx?loginsession_expiredlogin=85

    The spam bot goes to our homepage, stays there for about 1 minute, and then exits. Luckily we have some pretty beefy servers, so it hasn't even come close to overloading them yet. Last month I started blocking the IP addresses of the spam bots, but they seem to keep getting new ones every day. So far I have blocked over 200 IP addresses; below are a few of the ones I have blocked. They all come from Bangladesh.

        58.97.238.214
        58.97.149.132
        180.234.109.108
        180.149.31.221
        117.18.231.5
        117.18.231.12

    Since this has been going on for the past 5 months, our real site traffic has started to drop, and every day our orders get lower and lower. Also, since these spam bots simply go to our homepage and then leave, our bounce rate in analytics has skyrocketed. My questions are:

    - Is it possible that these spam bots are affecting our SEO? 60% of our orders come from natural search, and since this whole thing started, orders have slowly been dropping.
    - What would be the reason someone would want to waste resources in doing this to our site? IPs aren't free and neither are domain names; what would be the goal in doing this to us? We have Google AdWords but don't advertise on extended networks, nor do we advertise in Bangladesh (since we don't ship there), so they are not making money on AdSense.
    - Has anyone experienced anything similar to this? What did you do, and what was the final outcome?
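
    Rather than collecting the offending IPs by hand, the access log can be mined for them. A sketch (the log path and the common/combined log format are assumptions; adjust both to your server):

        import re
        from collections import Counter

        SPAM_REFERRER = re.compile(
            r"(sheratonbd|newsheraton|newsheration|newsheratonltd)\.com")

        ips = Counter()
        with open("access.log") as log:
            for line in log:
                if SPAM_REFERRER.search(line):
                    ips[line.split()[0]] += 1   # client IP is the first field

        # Emit ready-to-paste .htaccess deny lines, most active IPs first.
        for ip, hits in ips.most_common():
            print(f"deny from {ip}   # {hits} hits")

    Run on a schedule, this turns the daily whack-a-mole into a bulk update; blocking by referrer string at the server level is another option worth weighing.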

    Read the article

  • Choosing open source vs. proprietary CMS

    - by jkneip
    Hi - I've been tasked with redesigning a website for a small academic library. While I've only been in charge of the site for 6 months, we've been maintaining static HTML pages edited in Dreamweaver for years. The last count of our total pages is around 400. Our university is going with an enterprise-level solution called Sitefinity, although we maintain our own domain and are responsible for maintaining our own presence. Some background: my library has a couple of Microsoft IIS servers on which this static HTML site has been running. I'm advocating for the implementation of a CMS while doing this redesign. The problem is that I'm basically the lone webmaster, so I have no one to agree or disagree with my choice. There are also only 1-2 content editors right now for the site, but a CMS could change that factor. I would like to use the functionality of having servers that run .NET and MS SQL, but I am more experienced at setting up and maintaining open source software like WordPress or Drupal on web hosts. My main concern is choosing a CMS that will be easy to update / maintain / deal with upgrades (i.e., support) in case I'm not there in the future. So I'm wondering how to weigh the open source CMS vs. relatively inexpensive commercial CMS decision, and whether choosing a PHP/MySQL vs. ASP.NET development environment should play into that decision. Thanks for any input that can be offered based on the details I've given. Thanks, Jason

    Read the article

  • Obscure SPUtility.SendMail Behavior When Manually Passing in Mail Headers

    - by Damon
    There are two ways to send mail in SharePoint: you can either use the mail components from the System.Net namespace, or you can send email using SharePoint's SPUtility.SendMail method. One of the benefits of the SPUtility.SendMail method is that it uses the mail configuration from SharePoint, so you can manage settings in Central Administration instead of having to go through and modify your web.config file. SPUtility.SendMail can get the job done, but it's definitely not as developer-friendly as the components from the System.Net namespace. If you want to CC someone on an email, for example, you do NOT have a nice CC parameter; you have to manually add the CC mail header and pass it into the SPUtility.SendMail method. I had to do this the other day, and ran into a really obscure issue.

    - If you do NOT pass the headers into the method, then SharePoint sends the email using the From address configured in the Outgoing Mail settings in Central Admin.
    - If you pass headers into the method, but do not include the from header, then SharePoint sends the mail using the email address of the current user.

    This can be an issue if your mail server is set up to reject an email from an invalid email address or an email address that is not on your domain. The way to fix this issue is to always pass in the from header. If you want to use the configured From address, then you can do the following:

        SPWebApplication webApp = SPWebApplication.Lookup(new Uri(SPContext.Current.Site.Url));
        StringDictionary headers = new StringDictionary();
        headers.Add("from", webApp.OutboundMailSenderAddress);

    Read the article

  • 12.10 Wireless hotspot configuration and internet browsing - question

    - by Indian
    In our campus we have a leased line connection from a service provider, which has an external IP W.X.Y.Z. This connection is distributed from the server to several sub-networks / subnets as follows:

    - Faculty: 172.33...... / 255.255.0.0
    - Administration: 172.34....... / 255.255.255.0
    - Students: 172.35..... / 255.255.216.0

    A student has a laptop with a fixed IP address 172.35.23.123 / 255.255.216.0, where the IP address is on the ethernet port. The gateways for internet access are 172.31.1.1 and 172.31.1.2. Further, the student has a wireless port which is inaccessible in the hostel area. The student's OS is Ubuntu 12.10. The student is in possession of an Android phone on which he wishes to install specific software and therefore wishes to activate the internet on it. The student has already attempted the wireless hotspot solution which works for 12.04, but has not been successful. Various instructions on the internet have helped the student do the following.

    Installation of the DHCP server and hostapd:

        sudo apt-get install isc-dhcp-server
        sudo apt-get install hostapd

    File: /etc/network/interfaces

        auto lo
        iface lo inet loopback
        auto wlan0
        iface wlan0 inet static
            address 10.10.0.1
            netmask 255.255.255.0
            dns-nameservers 172.31.1.1 172.31.1.2

    File: /etc/dhcp/dhcpd.conf

        subnet 10.10.0.0 netmask 255.255.255.0 {
            range 10.10.0.2 10.10.0.4;
            option routers 10.10.0.1;
            option domain-name-servers 172.31.1.1, 172.31.1.2;
            default-lease-time 6000;
            max-lease-time 72000;
        }

    File: /etc/hostapd/hostapd.conf

        interface=wlan0
        driver=nl80211
        ssid=my_hotspot
        channel=1
        hw_mode=g
        auth_algs=1
        wpa=3
        wpa_passphrase=1234567890
        wpa_key_mgmt=WPA-PSK
        wpa_pairwise=TKIP CCMP
        rsn_pairwise=CCMP

    File: /etc/default/hostapd

        RUN_DAEMON="yes"
        DAEMON_CONF="/etc/hostapd/hostapd.conf"
        DAEMON_OPTS="-dd"

    File: /etc/default/isc-dhcp-server

        INTERFACES="wlan0"

    File: /etc/rc.local

        # note: NAT also requires IP forwarding to be enabled
        # (net.ipv4.ip_forward=1), a step that is easy to miss
        iptables -t nat -A POSTROUTING -s 10.10.0.0/16 -o eth0 -j MASQUERADE
        exit 0

    After all the configuration, the computer is restarted. The student can see that the hotspot named "my_hotspot" is available, and the hotspot also awards an address to the Android phone. However, the phone is still not able to browse the internet.

    Read the article

  • Client-Server MMOG & data structures sync when joining / playing

    - by plang
    After reading a few articles on MMOG architecture, there is still one point on which I cannot find much information: how you keep server data in sync on the client, both when you join and while you play. A pretty vague question, I agree. Let me refine it: let's say we have an MMOG virtual world subdivided into geographical cells. A player in a cell is mostly interested in what happens in the cell itself and all the surrounding cells, not more. When joining the game for the first time, the only thing we can do is send some sort of "database dump" of the interesting cells to the client. When playing, I guess it would be very inefficient to do the same thing regularly. I imagine the best thing to do is to send "deltas" to the client, which would allow keeping the local database in sync. Now let's say the player moves and arrives in another cell. The surrounding cells change, and for all the new cells the player subscribes to, the same technique as used when joining the game has to be used: some sort of "database dump". This mechanic of joining/moving in a cell-based MMOG virtual world interests me, and I was wondering if there are tried and tested techniques in this domain. Thanks!
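
    The dump/delta split can stay very small. Here is a sketch of both message types (plain Python for illustration; the entity IDs and fields are invented): a full dump when a client subscribes to a cell, and a diff of changed/removed entities for every update after that:

        def full_dump(cell_state):
            # sent once, when the player subscribes to the cell
            return {"type": "dump", "entities": dict(cell_state)}

        def delta(old, new):
            # sent periodically while subscribed
            changed = {eid: s for eid, s in new.items() if old.get(eid) != s}
            removed = [eid for eid in old if eid not in new]
            return {"type": "delta", "changed": changed, "removed": removed}

        old = {1: {"x": 10, "y": 4}, 2: {"x": 7, "y": 9}}
        new = {1: {"x": 11, "y": 4}, 3: {"x": 0, "y": 0}}
        print(delta(old, new))   # 1 moved, 2 left the cell, 3 entered it

    Moving between cells then reduces to unsubscribing from cells that fell out of range (drop their local state) and calling full_dump for the newly subscribed ones.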

    Read the article

  • Repeat use of Schema / Rich Snippets Markup, i.e. LocalBusiness Data

    - by bybe
    I am unable to find official wording, and I'm hoping that some Rich Snippets/Schema guru can give me some insight into the proper usage of repeated content when it comes to markup. I'm building a site that wants to use Schema as the markup type, and the owner would like as much usage as possible. The business name, telephone and address will appear on every page; now, is it valid, or even useful, to use Rich Snippets on every page where this information is displayed? For example, this information appears in the header and footer of every page of the site. To give you an example of my current markup, see below:

        <body itemscope itemtype="http://schema.org/LocalBusiness">
          <header>
            <a itemprop="url" href="http://www.domain.co.uk/">
              <img itemprop="logo" src="image.png" alt="Company Name Logo" />
            </a>
            <span itemprop="telephone">01202 000 000</span>
          </header>
          <div> This is where the content will go</div>
          <footer>
            <span itemprop="name">Company Name</span>
            <span itemprop="description"> A small little bit about this company</span>
            <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
              <span itemprop="streetAddress">Address Goes here</span>
              <span itemprop="addressLocality">Area Here</span>,
              <span itemprop="addressRegion">Region Here</span>
            </div>
          </footer>
        </body>
        <!-- LocalBusiness schema now closed -->

    So as you can see above, this information will be displayed on every single page. Is it valid, or bad practice, to repeat this information in Schema format?

    Read the article

  • Disallow robots.txt from being accessed in a browser but still accessible by spiders?

    - by Michael Irigoyen
    We make use of the robots.txt file to prevent Google (and other search spiders) from crawling certain pages/directories in our domain. Some of these directories/files are secret, meaning they aren't linked (except perhaps on other pages encompassed by the robots.txt file). Some of these directories/files aren't secret; we just don't want them indexed. If somebody browses directly to www.mydomain.com/robots.txt, they can see the contents of the robots.txt file. From a security standpoint, this is not something we want publicly available to anybody. Any directories that contain secure information are set behind authentication, but we still don't want them to be discoverable unless the user specifically knows about them. Is there a way to provide a robots.txt file but to have its presence masked from John Doe accessing it from his browser? Perhaps by using PHP to generate the document based on certain criteria? Perhaps something I'm not thinking of? We'd prefer a way to do it centrally (meaning a <meta> tag solution is less than ideal).
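
    The generate-on-request idea floated in the question looks roughly like this (sketched here in Python with the third-party Flask package purely for illustration, rather than PHP; the crawler tokens and the disallowed path are assumptions). One big caveat: the User-Agent header is trivially spoofed, so this hides robots.txt only from casual browsing, not from anyone determined:

        from flask import Flask, abort, request

        app = Flask(__name__)
        CRAWLER_TOKENS = ("googlebot", "bingbot", "slurp", "duckduckbot")

        @app.route("/robots.txt")
        def robots():
            ua = request.headers.get("User-Agent", "").lower()
            if not any(token in ua for token in CRAWLER_TOKENS):
                abort(404)  # look like the file doesn't exist to ordinary browsers
            return ("User-agent: *\nDisallow: /secret-stuff/\n",
                    200, {"Content-Type": "text/plain"})

    Verifying crawlers by reverse DNS instead of the User-Agent string is a stronger variant of the same gate, at the cost of a lookup per request.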

    Read the article
