Search Results

Search found 15403 results on 617 pages for 'request querystring'.


  • June 23, 1983: First Successful Test of the Domain Name System [Geek History]

    - by Jason Fitzpatrick
    Nearly 30 years ago the first Domain Name System (DNS) was tested, and it changed the way we interacted with the internet. Nearly-impossible-to-remember numeric addresses became easy-to-remember names. Without DNS you'd be browsing a web where numbered addresses pointed to numbered addresses. Google, for example, would look like http://209.85.148.105/ in your browser window. That's assuming, of course, that a numbers-based web ever gained enough traction to be popular enough to spawn a search giant like Google. How did this shift occur, and what did we have before DNS? From Wikipedia: The practice of using a name as a simpler, more memorable abstraction of a host's numerical address on a network dates back to the ARPANET era. Before the DNS was invented in 1983, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI. The HOSTS.TXT file mapped names to numerical addresses. A hosts file still exists on most modern operating systems by default and generally contains a mapping of the IP address 127.0.0.1 to "localhost". Many operating systems use name resolution logic that allows the administrator to configure selection priorities for available name resolution methods. The rapid growth of the network made a centrally maintained, hand-crafted HOSTS.TXT file unsustainable; it became necessary to implement a more scalable system capable of automatically disseminating the requisite information. At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883, which were superseded in November 1987 by RFC 1034 and RFC 1035. Several additional Requests for Comments have proposed various extensions to the core DNS protocols. Over the years it has been refined, but the core of the system is essentially the same. When you type "google.com" into your web browser, a DNS server is used to resolve that host name to the IP address 209.85.148.105, making the web human-friendly in the process. Domain Name System History [Wikipedia via Wired]
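
    For the curious, the name-to-address mapping the article describes is a single library call in most languages today. Here is a minimal sketch in TypeScript on Node.js (assuming an ES-module context so top-level await works); the printed address will be whatever the resolver returns now, not necessarily the 209.85.148.105 quoted above:

      import { lookup } from "node:dns/promises";

      // Ask the system resolver to do what HOSTS.TXT once did by hand:
      // map a human-friendly name to a numeric address.
      const { address } = await lookup("google.com");
      console.log(`google.com -> ${address}`);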

    Read the article

  • Book My Cloud Offering FREE PREMIUM Cpanel Accounts

    - by asd
    Book My Cloud is offering FREE PREMIUM cPanel accounts. Request type: http://support.bookmycloud.com/ (select request type "Free Cpanel Hosting"). Related features: disk quota: 10 GB; monthly bandwidth: 300 GB; max FTP accounts: 5; max email accounts: unlimited; max email lists: unlimited; max databases: 500; max subdomains: 500; max parked domains: 100; max addon domains: 1000; control panel: cPanel; no ads; full DNS management.

    Read the article

  • New OPN Certified Implementation Exam - Oracle Enterprise Manager 12C Essentials

    - by Javier Puerta
    This exam (1Z1-457) is now available in beta testing. This certification differentiates candidates in the marketplace by providing a competitive edge through proven expertise, and is intended for professionals who possess a strong technical background and exposure to Oracle Enterprise Manager 12c implementation. Take advantage and request a free beta exam voucher from [email protected]. Invite your partners to learn more about this exam here.

    Read the article

  • Can a NodeJS webserver handle multiple hostnames on the same IP?

    - by Matthew Patrick Cashatt
    I have just begun learning NodeJS and LOVE it so far. I have set up a Linux box to run it and, in learning to use the event-driven model, I am curious if I can use a common IP for multiple domain names. Could I point, for example, www.websiteA.com, www.websiteB.com, and www.websiteC.com all to the same IP (node webserver) and then route to the appropriate source files based on the request? Would this cause certain doom when it came to scaling to any reasonable size?
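
    For what it's worth, the mechanism that makes this possible is the HTTP Host header: all the names resolve to the same IP, and the server branches on whichever hostname the client asked for. A minimal sketch in TypeScript (site names and responses are placeholders):

      import http from "node:http";

      // One listener, many hostnames: dispatch on the Host header.
      const sites: Record<string, string> = {
        "www.websitea.com": "This is website A",
        "www.websiteb.com": "This is website B",
        "www.websitec.com": "This is website C",
      };

      http
        .createServer((req, res) => {
          // Host arrives as e.g. "www.websitea.com:8080"; strip any port.
          const host = (req.headers.host ?? "").split(":")[0].toLowerCase();
          const body = sites[host];
          if (body) {
            res.writeHead(200, { "Content-Type": "text/plain" });
            res.end(body);
          } else {
            res.writeHead(404);
            res.end("Unknown host");
          }
        })
        .listen(8080);

    The dispatch itself is cheap, so it is not the scaling bottleneck; in practice many deployments also put a reverse proxy such as nginx in front and keep one Node process per site.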

    Read the article

  • The Evolution of the Moon – Past to Present [Video]

    - by Asian Angel
    The Moon seems like a pretty quiet place these days, but it has not always been so. Take a journey through time and watch the evolution of the Moon with this terrific video from NASA! NASA | Evolution of the Moon [via Geeks are Sexy]

    Read the article

  • My site works as www.mysite.com but not as mysite.com

    - by toocool
    This is the site that I am developing: http://www.juve-news.com/ It works with the www prefix, but it doesn't when I try to open it without the www prefix: it gives me a 400 Bad Request. I have used another web host before; now I am trying a new one, and I have to add some DNS entries like in the picture here: http://cloudcontrol.com/developers/documentation/add-ons/aliases/ I don't know what I have done wrong. If anyone knows what the problem could be, please give me a tip.

    Read the article

  • MIX10 Talk - Slides and Code

    At MIX10, I presented the Developing with WCF RIA Services Quickly and Effectively talk. For the demos I used a BookClub application; the scenario is an application that team members use to share books and to browse, search, and request shared books. The following are the concepts that the demos and code cover: [Full post continued here...]

    Read the article

  • WebCenter Customer Spotlight: Marvel

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter. Solution Summary: Marvel Entertainment, LLC (Marvel) is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media for over seventy years. The customer wanted to optimize their brand licensing process, so Marvel worked with Oracle WebCenter partner Fishbowl Solutions and implemented a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based secure intranet/partner extranet solution now manages the entire life cycle of the brand licensing process. Marvel and their brand licensees now have complete visibility of brand license operations, including the history of approval requests and related content. Company Overview: Marvel Entertainment, LLC (Marvel), a wholly-owned subsidiary of The Walt Disney Company, is one of the world's most prominent character-based entertainment companies, built on a proven library of over 8,000 characters featured in a variety of media for over seventy years. Marvel utilizes its character franchises in entertainment, licensing and publishing. Sample characters: Spider-Man, Iron Man, Captain America, X-MEN, Thor, Avengers, and a host of others. Business Challenges: Marvel wanted to optimize the brand licensing process for their characters and had the following business requirements: facilitating content worldwide; a scalable and flexible infrastructure to manage multiple content types and huge file sizes; and optimizing the licensing process workflow through automatic notifications, tracking reviews, issuing approvals, etc. Solution Deployed: Marvel worked with Oracle WebCenter partner Fishbowl Solutions and implemented a centralized Content Hub based on Oracle WebCenter Content. The 100% web-based secure intranet/partner extranet solution now manages the entire life cycle of the brand licensing process. Internal users can now manage all digital assets related to a character through proper categorization of all items, workflow-based review and approval of branding styles, and a powerful search and retrieval service. Licensees of Marvel brands can now develop and submit concepts and prototypes online, which are reviewed and approved using a collaborative process. Business Result: Marvel and their brand licensees now have complete visibility of brand license operations, including the history of approval requests and related content. The character-brand-related content is now in the right place at the right time, at the user's fingertips, with highly improved quality. Additional Information: Marvel Open World Presentation; Oracle WebCenter Content

    Read the article

  • Using a front controller design pattern stops images from being served

    - by MrMe TumbsUp
    I am currently using a front controller; all requests for my website go through it. I have a problem with image links like: <img src="img/image.jpg" /> My front controller tries to dispatch the request to application/controller/ImgController.php, so the image won't load. I think it has something to do with the .htaccess file:

      RewriteEngine On
      # pass through any request that maps to an existing file, symlink or directory
      RewriteCond %{REQUEST_FILENAME} -s [OR]
      RewriteCond %{REQUEST_FILENAME} -l [OR]
      RewriteCond %{REQUEST_FILENAME} -d
      RewriteRule ^.*$ - [NC,L]
      # send everything else to the front controller
      RewriteRule ^.*$ index.php [NC,L]

    Read the article

  • Announcing the ASP.NET MVC 3 Release Candidate

    In this article, Scott provides a detailed overview of the features included with the ASP.NET MVC 3 Release Candidate. He examines some of the key features such as Razor IntelliSense within Visual Studio, the NuGet Package Manager, Partial Page Output Caching, Unobtrusive JavaScript and Validation, the Remote Validator, and Granular Request Validation. He also provides links to the PDC talk given by Scott Hanselman on ASP.NET MVC 3, including the new improvements shipped with the ASP.NET MVC 3 Release Candidate.

    Read the article

  • Partner Blog Series: PwC Perspectives - Looking at R2 for Customer Organizations

    - by Tanu Sood
    Welcome to the first post in our partner blog series. November Mondays are all about PricewaterhouseCoopers' perspective on identity and R2. In this series, identity management experts from PricewaterhouseCoopers (PwC) share their perspective on (and experiences with) the recent identity management release, Oracle Identity Management R2. The purpose of the series is to discuss real-world identity use cases that helped shape the innovations in the recent R2 release, and the implementation strategies that customers are employing today, with expertise from PwC. Part 1: Looking at R2 for Customer Organizations. In this inaugural post, we will discuss some of the new features of the R2 release of Oracle Identity Manager that our customer organizations are implementing today, and the business rationale behind them. Oracle's R2 security portfolio represents a solid step forward for a platform that is already market-leading. Prior to R2, Oracle was an industry titan in security with reliable products, expansive compatibility, and a large customer base. Oracle has taken its identity platform to the next level in its latest version, R2. The new features include a customizable UI, a request catalog, a more flexible security model, connector enhancements, and more. Oracle customers will be impressed by the new Oracle Identity Manager (OIM) business-friendly UI. Without question, Oracle has invested significant time in responding to customer feedback about making access requests and related activities easier for non-IT users. The flexibility to add information to screens, hide fields that are not important to a particular customer, and adjust web themes to suit a company's preference makes Oracle Identity Manager stand out among its peers. Customers can also expect to carry UI configurations forward with minimal migration effort to future versions of OIM. Oracle's flexible UI will benefit many organizations looking for a customized feel with out-of-the-box configurations. Organizations looking to extend their services to end users will benefit significantly from new usability features like OIM's 'Catalog.' Customers familiar with Oracle Identity Analytics' 'Glossary' feature will be able to relate to the concept. It enables roles, entitlements, accounts, and resources to be requested through the out-of-the-box UI. This is an industry-changing feature, as customers can make the process of requesting access easier than ever. For additional ease of use, Oracle has introduced a shopping-cart-style request interface that further simplifies the experience for end users. Common requests can be set up as profiles to save time. All of this is combined with the approval workflow engine introduced in R1, which provides the flexibility customers need to meet their compliance requirements. Enhanced security was also on the list of features Oracle wanted to deliver to its customers. The new end-user UI provides additional granular access controls. Common help desk use cases can be implemented with ease by updating the application profiles. Access can be rolled out so that administrators can only manage a certain department or organization. Further, OIM can be more easily configured to select which fields are read-only vs. updatable. Finally, this security model can be used to limit search results for roles and entitlements intended for a particular department. Every customer has a different need for access, and OIM now matches this need with a flexible security model.
    One of the important considerations when selecting an identity management platform is compatibility. The number of supported platform connectors, and how well the suite can integrate with non-supported platforms, is a key consideration when selecting an identity suite. Oracle has a long list of supported connectors. When a customer has a requirement for a platform not on that list, Oracle has a solution too. Oracle is introducing a simplified architecture called the Identity Connector Framework (ICF), which holds the potential to simplify custom connectors. Finally, Oracle has introduced a simplified process to profile new disconnected applications from the web browser. This is a useful feature that enables administrators to profile applications quickly, as well as empowering application owners to fulfill requests from their web browser. Support will still be available in R2 for connectors based on previous versions. Oracle Identity Manager's new R2 version has delivered many new features customers have been asking for. Oracle has matured its platform with R2, making it a truly distinctive platform among its peers. In our next post, expect a deep dive into use cases for a customer considering R2 as their new enterprise identity solution. In the meantime, we look forward to hearing from you about the specific challenges you are facing and your experience in solving them. Meet the Writers: Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium- to large-scale identity management solutions across multiple industries, including the utility, health care, entertainment, retail and financial sectors. Dharma has 14 years of experience in delivering IT solutions, of which the past 8 have been spent implementing identity management solutions. Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries, including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering identity management solutions. John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple identity and access management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL). Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries, including financial services, entertainment and retail. Jenny has three years of experience in delivering IT solutions, of which the past one and a half have been spent implementing identity management solutions. Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving.

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play: probably I will not have enough time to try both, at least at first. Now, many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic. Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails. In Django, views are functions that take a request as input and return a response. You can explicitly instantiate an instance of HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case the return value from your view always has the same type. In contrast, the render method from Grails is highly polymorphic. You can throw a context at it and it will try to render a template which is found by convention using that context. Or you can pass it a pair of a template path and a context and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller. In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can get similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan. I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which type you prefer is mainly a matter of preference. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework which does less for me. Back to the point of the question: which one among Play and Lift is more explicit about what you do, and which one tries to simplify more of what you have to do with a layer of "magic"?

    Read the article

  • You don't have permission to access /index.php on this server

    - by Tran Dinh Thoai
    I made a 'login with OpenID' page and I got an error when the OpenID provider returned to my page: You don't have permission to access /index.php on this server. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request. If I remove the parameters that are returned by the OpenID provider, the page runs fine. How can I fix this problem? The login page that causes the error is: http://bryox.com/login

    Read the article

  • Nginx Subdomain Problem

    - by user292299
    I can't access my subdomain on localhost. My local domain is localhost.dev and it works, but I want automatic subdomains for a PHP script (username.localhost.dev). I tried this:

      server {
          listen 80 default_server;
          listen [::]:80 default_server ipv6only=on;

          access_log /var/www/access.log;
          error_log /var/www/error.log;

          root /var/www;
          index index.php index.html index.htm;

          server_name localhost.dev *.localhost.dev;

          location / {
              # first attempt to serve the request as a file, then as a
              # directory, then fall back to displaying a 404
              try_files $uri $uri/ /index.html;
          }

          location /f2/public/ {
              try_files $uri $uri/ /f2/public/index.php?$args;
          }

          location /doc/ {
              alias /usr/share/doc/;
              autoindex on;
              allow 127.0.0.1;
              allow ::1;
              deny all;
          }

          # pass PHP scripts to the FastCGI server listening on 127.0.0.1:9000
          location ~ \.php$ {
              include /etc/nginx/fastcgi_params;
              try_files $uri =404;
              fastcgi_pass 127.0.0.1:9000;
              fastcgi_index index.php;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }
      }

    It's not working. For testing I changed the server_name line to:

      server_name localhost.dev asd.localhost.dev;

    I still can't access asd.localhost.dev. I also tried two separate server{} sections:

      server {
          listen 80 default_server;
          listen [::]:80 default_server ipv6only=on;
          access_log /var/www/access.log;
          error_log /var/www/error.log;
          root /var/www;
          index index.php index.html index.htm;
          server_name localhost.dev;
          # ... same location blocks as above ...
      }

      server {
          access_log /var/www/access.log;
          error_log /var/www/error.log;
          root /var/www;
          index index.php index.html index.htm;
          server_name asd.localhost.dev;
          # ... same location blocks as above ...
      }

    I still can't get it to work.

    Read the article

  • How are video card/graphics drivers written, exactly? [closed]

    - by Bigyellow Bastion
    Since I want an application layer for my operating system, I was wondering what the best way would be to write, or to better understand how to write, my own graphics device drivers or software that lets application-layer software request things from my graphics system (i.e. windows, icons, mouse pointers). I don't need every little detail; I'd just like some insight into the procedure by which one might accomplish this feat: perhaps the best way to understand the interaction between the GUI rendering software and the hardware directly, and how application software requests services from the graphics driver itself.

    Read the article

  • Wrapping REST based Web Service

    - by PaulPerry
    I am designing a system that will run online under Microsoft Windows Azure. One component is a REST-based web service which is really a wrapper (using the proxy pattern) around the REST web services of a business partner, which have to do with BLOB storage (note: we are not using Azure storage). The majority of the functionality will be taking a request, calling our partner's web service, receiving the response, and then passing that back to the client. There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (Windows and Mac), mobile apps (iOS), and a web front end. Having a single API which we then map to our partner's protects us if that partner ever changes. I want our service to support both JSON and XML as data transfer formats: JSON for the web, and probably XML for desktop and mobile (we already have an XML parser in those products). Our partner also supports both of these formats. I was planning on using ASP.NET MVC 4 with the Web API. As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably code defensively for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding to set up our API and then turn around and call our partner's API. There probably is not much choice about that, though. But in the back of my mind I wonder if maybe a more dynamic language would be a better choice. I want to reach out and see if anybody has had to do this before, what technology solutions they used (I am not attached to this one; these days Azure can host other technologies), and whether anybody who has done something like this can point out any issues that came up. Thanks! Researching the issue only seems to find solutions which focus on connecting to a SOAP web service over a proxy server, which is not what I am referring to here. Note: Cross posted (by suggestion) from http://stackoverflow.com/questions/11906802/wrapping-rest-based-web-service Thank you!
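
    For what it's worth, the pass-through shape described above is small enough to sketch. This is a rough illustration in TypeScript on Node.js (Express) rather than the poster's ASP.NET Web API, and the partner base URL and route are hypothetical:

      import express from "express";

      const app = express();
      const PARTNER_BASE = "https://partner.example.com/v1"; // hypothetical partner API

      // Forward blob requests to the partner, honoring the client's
      // requested format (JSON or XML) via the Accept header.
      app.get("/blobs/:id", async (req, res) => {
        const accept = req.get("Accept") ?? "application/json";
        const upstream = await fetch(`${PARTNER_BASE}/blobs/${req.params.id}`, {
          headers: { Accept: accept },
        });
        res
          .status(upstream.status)
          .type(upstream.headers.get("Content-Type") ?? accept)
          .send(await upstream.text());
      });

      app.listen(3000);

    Because the body is relayed as opaque text, elements the partner adds or removes simply flow through, which side-steps part of the static-typing worry; the typing question only bites where the wrapper actually inspects the payload.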

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase/Cassandra for a token system we're re-implementing. I could probably squeeze quite a lot more out of MySQL, but it seems we are clinging to the wrong tool for the task just because we know it well, and eventually we will hit a wall (as happened to us in other areas). Naturally I started looking into possible NoSQL solutions. The prominent ones (at least in terms of buzz) are HBase and Cassandra. The story is more or less like this: A user can send a gift to other users. Each gift has a list of recipients, or is public, in which case it is limited by number or expiration date. For each gift sent we generate a token that uniquely identifies that gift. For each gift we track the list of potential recipients and their current status relating to that gift (accepted, declined, etc.). A user can request to see all his currently pending gifts. A user can request a list of users he has sent a gift to today (used to limit the number of gifts sent). We require the ability to "dump" or "ignore" expired gifts (x-day-old gifts are considered expired). There are some other requirements, but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra? Well, the wall was performance. A few tens of millions of records per day over 2 tables, kept for 2 weeks (I wish I could have kept them longer, but there was no way). The response times kept getting slower and slower until eventually we had to start cutting down the number of days we kept data. Caching helps here, but it's not an ideal solution, since a big part of the ops are updates. Also, as I hinted in my original post, we use MySQL extensively. We know exactly what it can and can't do, in naive implementations, followed by native partitioning, and finally by horizontally sharding our dataset on the application level to reside on multiple DB nodes. It can be done, but that's not really what I'm trying to get at here. I asked a very specific question about designing a solution using a NoSQL solution, since it's very hard to find examples of such designs out there. Brainlag, I'm not trying to come off as rude. I actually appreciate a lot that you are the only one who even bothered to respond, but I see it over and over again: people ask questions and others assume they have no idea what they're talking about and give an irrelevant answer. Please ignore RDBMS; the question is about NoSQL.
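
    To make the modeling question concrete, here is one rough way the "gifts I sent today" requirement might look in Cassandra, sketched in TypeScript against the DataStax Node.js driver. The keyspace, table and column names are hypothetical, and a table-level TTL does the "dump expired gifts" work:

      import { Client } from "cassandra-driver";

      // Hypothetical schema, created separately in CQL:
      //
      //   CREATE TABLE gifts.sent_by_day (
      //     sender uuid, day date, gift_token uuid, recipient uuid, status text,
      //     PRIMARY KEY ((sender, day), gift_token, recipient)
      //   ) WITH default_time_to_live = 1209600;  -- rows expire after 14 days
      //
      // Partitioning by (sender, day) turns the daily-limit check into a
      // single-partition read instead of a scan.

      const client = new Client({
        contactPoints: ["127.0.0.1"],
        localDataCenter: "datacenter1",
        keyspace: "gifts",
      });

      async function giftsSentToday(sender: string, day: string) {
        const result = await client.execute(
          "SELECT gift_token, recipient, status FROM sent_by_day WHERE sender = ? AND day = ?",
          [sender, day],
          { prepare: true }
        );
        return result.rows;
      }

    The "show my pending gifts" query would get its own table keyed by recipient; the usual Cassandra approach is to write the same fact into one table per query shape rather than join.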

    Read the article

  • Ideas for time-keeping in a web-based RPG?

    - by ashy_32bit
    I'm assigned the task of doing the preliminary research for a web-based MMO RPG. Now my biggest problem here is "web based" vs "MMO RPG". I did some research on time-keeping systems and I'm totally confused as to how exactly something as real-time as an MMO RPG can work on a pull-only (unidirectional) platform like HTTP. I know there is also a turn-based alternative to time keeping, but can it work in an MMO setting? EDIT: Take a battle for example: player A (human) wants to attack player B (also human) in the open. How does it work when player A issues the "attack" command on player B? How do I inform player B that he is being attacked? And then how exactly does the battle go on between the two over an HTTP-based communication channel? To my knowledge this is impossible unless you resort to another technology (HTTP is one-way; that is, you can just ask the server and get a response, and the server can't update you unless asked to. This is very well known and simply explained). So I thought maybe I could somehow change the whole time-keeping model from real-time to a more non-real-time model (towards a turn-based RPG, for example) and somehow work around the whole problem of "interactivity". EDIT 2: It is not that I don't want to use any server-side technologies. For sure it is not going to work client-side-only, even for the most trivial of multi-player games, let alone an RPG. So sure, there will be a (probably complex) server-side component to it (the so-called game engine, I suppose). The problem is not the technology that implements the logic (game mechanics) but the communication technology and how it limits the game mechanics' abilities (like how real-time or turn-based the game is going to be). HTTP is a request-response protocol, meaning you get served only if you ask for it (explicitly send a GET or POST request to the server). The HTTP server cannot inform you if anything of interest happens in the game world unless you refresh the page (as some suggested) or you use some bi-directional technology (a totally different animal) like Flash, WebSockets, HTML5, etc. So maybe the question is: is it possible to implement an MMO RPG using only HTML5/PHP and no periodic page refreshes? If so, what would be the rules to make it an MMO RPG? Can't explain it any clearer. Sorry :D
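
    One common middle ground between full page refreshes and a bi-directional socket is long polling over plain HTTP: the client keeps a request open and the server answers the moment something happens. A minimal client-side sketch in TypeScript (the /battle/events endpoint, its query parameter and its JSON payload are all hypothetical):

      // The client holds a GET open; the server replies as soon as something
      // happens (e.g. "player B, you are being attacked") or with 204 after an
      // idle timeout. Either way the client immediately asks again.
      async function pollBattleEvents(playerId: string): Promise<void> {
        for (;;) {
          try {
            const res = await fetch(`/battle/events?player=${encodeURIComponent(playerId)}`);
            if (res.status === 200) {
              const events: Array<{ type: string; from: string }> = await res.json();
              for (const ev of events) {
                console.log(`event ${ev.type} from ${ev.from}`); // e.g. open the battle UI
              }
            }
            // on 204 (nothing happened) just loop around and re-poll
          } catch {
            await new Promise((r) => setTimeout(r, 2000)); // back off on network errors
          }
        }
      }

      pollBattleEvents("player-b");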

    Read the article

  • WIF, ADFS 2 and WCF - Part 1: Overview

    - by Your DisplayName here!
    A lot has been written already about passive federation and the integration of WIF and ADFS 2 into web apps. The whole active/WS-Trust feature area is much less documented and covered in articles and blogs. Over the next few posts I will try to compile all relevant information about the above topics, but let's start with an overview. ADFS 2 has a number of endpoints under the /services/trust base address that implement the WS-Trust protocol. They are grouped by the WS-Trust version they support (/13 and /2005), the client credential type (/windows*, /username*, /certificate*) and the security mode (*transport, *mixed and message). You can see the endpoints in the MMC console under the Service/Endpoints page. In other words, you use one of these endpoints (which one exactly depends on your configuration / system setup) to request tokens from ADFS 2. The bindings behind the endpoints are more or less standard WCF bindings, but with SecureConversation (establishSecurityContext) disabled. That means that whenever you need to programmatically talk to these endpoints, you can (easily) create client bindings that are compatible. Another option is to use the special bindings that come with WIF (in the Microsoft.IdentityModel.Protocols.WSTrust.Bindings namespace). They are already pre-configured to be compatible with the ADFS endpoints. The downside of these bindings is that you can't use them in configuration. That's definitely a feature request of mine for the next version of WIF. The next important piece of information is the so-called Federation Service Identifier. This is the value that you (at least by default) have to use as a realm/appliesTo whenever you are requesting a token from ADFS (e.g. in an IdP -> R-STS scenario). Or (even more) technically speaking, ADFS 2 checks for this value in the audience URI restriction in SAML tokens. You can get to this value by clicking "Edit Federation Service Properties" in the MMC when the Service tree node is selected. OK, I will come back to this basic information in the following posts. Basically I want to go through the following scenarios: ADFS in the IdP role; ADFS in the R-STS role (with a chained claims provider); using the WCF bindings for automatic token issuance; and using WSTrustChannelFactory for manual token handling. Stay tuned…

    Read the article

  • Oracle E-Business Suite Tip : SQL Tracing

    - by Giri Mandalika
    Issue: Attempts to enable SQL tracing from the concurrent request form fail with the error: "Function not available to this responsibility. Change Responsibilities or contact your System Administrator." Resolution: Switch responsibility to "System Administrator". Navigate to System > Profiles and query for the "Utilities : Diagnostics" profile (e.g. search on "%Diagnostics%"). Once you have found the profile, change its value to "Yes". Restart the web browser and try enabling SQL trace again.

    Read the article

  • EBS Concurrent Processing Information Center

    - by LuciaC
    Do you have problems or questions about concurrent request processing?  Do you want to know: How and when to run CP Diagnostics? What are the latest Hot Topics being looked at for Concurrent Processing? All about the Concurrent Process Analyzer self-service Health-Check script? Go to the EBS Concurrent Processing Information Center (Doc ID 1304305.1) and find out the above and lots more!

    Read the article

  • Ubuntu 11.10 crashes all browsers often

    - by murat
    I have been using Ubuntu 11.10 for a year; today it surprised me. When I open Google Chrome it just closes itself. At first I thought it was only Chrome, so I tried Firefox: it also closes itself. One thing more: I tried desktop programs such as the image viewer, and they close themselves too. I restarted, but nothing changed. What can cause this? Is it a virus or some other system problem? I did not have any problem like this until today.

      $ google-chrome
      (google-chrome:7064): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      (google-chrome:7064): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      (google-chrome:7064): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      (google-chrome:7064): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      --2012-04-04 19:33:14-- https://clients2.google.com/cr/report
      Resolving clients2.google.com... 173.194.70.100, 173.194.70.101, 173.194.70.102, ...
      Connecting to clients2.google.com|173.194.70.100|:443... connected.
      HTTP request sent, awaiting response... 200 OK
      Length: unspecified [text/html]
      2012-04-04 19:33:19 (888 KB/s) - `/dev/fd/3' saved [16]
      (exe:7166): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      (exe:7166): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      (exe:7166): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      (exe:7166): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      Moonlight: 3.99.0.3
      Moonlight: Attempting to load libmoonloaderxpi
      (exe:7201): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      (exe:7201): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      (exe:7201): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      (exe:7201): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
      Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
      Segmentation fault

    After trying to install libvdpau1 the error changed:

      google-chrome
      --2012-04-04 20:05:03-- https://clients2.google.com/cr/report
      Resolving clients2.google.com... 173.194.70.113, 173.194.70.138, 173.194.70.139, ...
      Connecting to clients2.google.com|173.194.70.113|:443... connected.
      HTTP request sent, awaiting response... Moonlight: 3.99.0.3
      Moonlight: Attempting to load libmoonloaderxpi
      200 OK
      Length: unspecified [text/html]
      Saving to: `/dev/fd/3'
      [<=> ] 0 --.-K/s f4c55117d1b4656e [ <= ] 16 --.-K/s in 0s
      2012-04-04 20:05:12 (337 KB/s) - `/dev/fd/3' saved [16]
      Segmentation fault

    Read the article

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes? I'm making an ASP.NET MVC4 application. On every request the security details about the user will need to be checked with the area/controller/action that they are accessing to see if they are allowed to view it. The security information is stored in the database. For example: User Permission UserPermission Action ActionPermission A "Permission" is a token that is applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table) then they have the token and can therefore access the action. I've been looking in to how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting a database (which is a considerable performance hit at the moment). I've tried storing things in lists, using a caching provider but I either run in to problems or performance doesn't improve. One problem that I constantly run in to is that I'm using lazy loading and dynamic proxies with EntityFramework. This means that even if I ToList() everything and store them somewhere static, the relationships are never populated. For example, User.Permissions is an ICollection but it's always null. I don't want to Include() everything because I'm trying to keep things simple and generic (and easy to modify). One thing I know is that an EntityFramework DbContext is a unit of work that acts with 1st-level caching. That is, for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read about permission data. Upon testing this it worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals or simply leave it to refresh when the application pool is recycled. Basically it will behave like a cache. Note that the rest of the application will interact with other contexts that exist per request as normal. Is there any disadvantage to this approach? Could I be doing something different?

    Read the article

  • HTG Explains: What Can You Find in an Email Header?

    - by Jason Faulkner
    Whenever you receive an email, there is a lot more to it than meets the eye. While you typically only pay attention to the from address, the subject line and the body of the message, there is lots more information available "under the hood" of each email which can provide you a wealth of additional information.

    Read the article
