Search Results

Search found 49554 results on 1983 pages for 'database users'.

Page 694/1983 | < Previous Page | 690 691 692 693 694 695 696 697 698 699 700 701  | Next Page >

  • What is a 'good number' of exceptions to implement for my library?

    - by Fuzz
    I've always wondered how many different exception classes I should implement and throw for the various pieces of my software. My development is usually C++/C#/Java related, but I believe this is a question for all languages. I want to understand what a good number of different exceptions to throw is, and what the developer community expects of a good library.

    The trade-offs I see include:
      - More exception classes allow very fine-grained error handling for API users (prone to user configuration or data errors, or files not being found)
      - More exception classes allow error-specific information to be embedded in the exception, rather than just a string message or error code
      - More exception classes can mean more code maintenance
      - More exception classes can mean the API is less approachable to users

    The scenarios in which I wish to understand exception usage include:
      - a 'configuration' stage, which might include loading files or setting parameters
      - an 'operation' phase where the library might be running tasks and doing some work, perhaps in another thread

    Other patterns of error reporting without exceptions, or with fewer exceptions (as a comparison), might include:
      - fewer exceptions, but embedding an error code that can be used as a lookup
      - returning error codes and flags directly from functions (sometimes not possible from threads)
      - implementing an event or callback system upon error (avoids stack unwinding)

    As developers, what do you prefer to see? If there are MANY exceptions, do you bother handling them separately anyway? Do you have a preference for error handling types depending on the stage of operation?
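    To make the trade-off concrete, here is a rough sketch (in TypeScript for brevity; all names are hypothetical, and the same shape carries over to C++/C#/Java): a single base class carrying an error code plus a small number of stage-specific subclasses, which is the middle ground between one catch-all exception and one class per failure.

      // Base library error: carries a stable code so callers who don't want
      // fine-grained catch blocks can still branch on err.code.
      class LibraryError extends Error {
        constructor(message: string, readonly code: string) {
          super(message);
          this.name = new.target.name;
        }
      }

      // Configuration-stage failures: bad parameters, missing files, etc.
      class ConfigurationError extends LibraryError {
        constructor(message: string, readonly setting?: string) {
          super(message, "CONFIG");
        }
      }

      // Operation-stage failures: the task itself went wrong.
      class OperationError extends LibraryError {
        constructor(message: string, readonly taskId?: string) {
          super(message, "OPERATION");
        }
      }

      // The caller picks the granularity: catch the subclass it cares about,
      // or fall back to the base class and the embedded code.
      function loadConfig(path: string): void {
        throw new ConfigurationError(`config file not found: ${path}`, path);
      }

      try {
        loadConfig("settings.json");
      } catch (err) {
        if (err instanceof ConfigurationError) {
          console.error(`fix your setup: ${err.message}`);
        } else if (err instanceof LibraryError) {
          console.error(`library failed with code ${err.code}`);
        } else {
          throw err; // not ours
        }
      }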

    Read the article

  • Terminal Server 2003 Performance Troubleshooting

    - by MikeM
    Let me get your thoughts on terminal server performance problems. The server hosts an average of 25 users who, after running some numbers, use on average 600MB of memory with their main applications running (web browser, Adobe Reader, IP phone client). All users are on the same LAN as the server. We constantly experience slow response and short session lockups. Combined CPU usage is on average 10%. What appears strange to me is that the system shows 29GB physical memory with 25GB of it free. Page file usage is about 50%, averaging 9GB used. Some server specs: OS: Server 2003 32bit Enterprise with the /PAE flag; RAM: 32GB; CPU: 2x Quad Core @ 2.27GHz; HD: RAID5 1.2GB. After doing basic troubleshooting using Performance Monitor, I'm led to believe that the performance problems are caused by the 32bit OS limitation in addressing the full 32GB of physical memory, even though the /PAE flag is used. Can anyone suggest troubleshooting steps that could lead to a more conclusive answer? Thanks

    Read the article

  • What are all the components of a "Facebook App"?

    - by pnongrata
    I am a developer who has never personally partaken in social media (in any form) for reasons completely outside the scope of this question. I am "off the grid" (no Facebook, Twitter, etc. accounts). I'm currently building a web app and would like the app to have a presence on Facebook, and possibly even "port" my app over as a Facebook app. My understanding of Facebook Apps is that they're just normal web apps that get <iframe>d into a Facebook page. The app is actually hosted on your server (not FB's servers). But this got me thinking: Don't Facebook Apps have "profile pages"? Is there anything developers can do to customize the behavior of their own profile pages? Do apps have the ability to do things like MySpace themes used to do (i.e., customize and interact with User profile pages, Groups, etc.)? Do Facebook Apps gain any sort of extra capabilities (inside of Facebook) that a normal web app would not have? It seems to me that if a Facebook App is just an iframed web app, it would still need to communicate with Facebook via its many APIs, just like a normal app would, right? If it's not possible to write an app that can customize the UI or behavior of user profiles and other pages, then how do games like "Farmville" interact with User profiles so that you see updates like "John Smith reached level 2 of Farmville"? Basically, I'm asking any battle-worn Facebook app developers if my understanding of Facebook Apps is correct, or if I'm missing anything big here. It's my understanding that for security reasons (obviously) Facebook doesn't allow apps to customize anything outside of the iframe they live in. So if I want my app to appear like it's "interacting" with its Facebook users, it looks like I just need to publish stuff to the users' news feeds to try and encourage people to use my app (please correct me if I'm wrong here!). Thanks in advance for any corrections, clarifications, advice or suggestions!

    Read the article

  • eSTEP Newsletter December 2012

    - by uwes
    Dear Partners,

    We would like to inform you that the December issue of our Newsletter is now available. The issue contains information on the following topics:

    Notes from Corporate: It's Earth Day - Every Day; Oracle SPARC Newsletter; Pre-Built Developer VMs (for Oracle VM VirtualBox); Oracle Database Appliance Now Certified by SAP; Database High Availability; Cultivating Business-Led Innovation

    Technical Corner: Geek Fest! Talking About the Design of the T4 and T5 SPARC Chips; Blog: Is This Your Idea of Disaster Recovery?; Oracle Practitioner Guide - A Pragmatic Approach to Cloud Adoption; Darren Moffat Explains the New ZFS Encryption Features in Solaris 11.1; Command Summary: Basic Operations with the Image Packaging System; SPARC T4 Server Delivers Outstanding Performance on Oracle Business Intelligence Enterprise Edition 11g; SPARC T4-4 Servers Set First World Record on PeopleSoft HCM 9.1 Benchmark; Sun ZFS Appliance Monitor Refresh: Core Factor Table; Remanufactured Systems Program for Sun Systems from Oracle; Reminder: Oracle Premier Support for Systems; Reminder: Oracle Platinum Services

    Learning & Events: eSTEP Events Schedule; Recently Delivered Techcasts; Webinar: Maximum Availability with Oracle GoldenGate

    References: LUKOIL Overseas Holding Optimizes Oil Field Development Projects with Integrated Project Management; United Networks Increases Accounting Flexibility and Boosts System Performance with ERP Applications Upgrade; Ziggo Rapidly Creates Applications That Accelerate Communications-Service Orders

    How to ...: The Role of Oracle Solaris Zones and Oracle Linux Containers in a Virtualization Strategy; How to Update to Oracle Solaris 11.1; Using svcbundle to Create Manifests and Profiles in Oracle Solaris 11.1; How to Migrate Your Data to Oracle Solaris 11 Using Shadow Migration; How to Script Oracle Solaris 11.1 Zones for Easy Cloning; How to Script Oracle Solaris 11 Zones Creation for a Network-in-a-Box Configuration; How to Know Whether T4 Crypto Accelerators Are in Use; Fault Handling and Prevention - Part 1; Transforming and Consolidating Web Data with Oracle Database; Looking Under the Hood at Networking in Oracle VM Server for x86; Best Way to Migrate Data from Legacy File System to ZFS in Oracle Solaris 11; Special Year End Article: The Top 10 Strategic CIO Issues For 2013

    You will find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.
    URL: http://launch.oracle.com/
    PIN: eSTEP_2011
    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • Storing data offline with javascript

    - by Walker
    My question is about storing data offline, and whether I will need to bring in an outside programmer or whether this could be learned within a few weeks. The website I am working on will have an interface where users will log in and go through a series of quizzes in the form of checkboxes, drop-down menus, and others. Each page/quiz area could have 20-100 total checkboxes in a series of 3-5 rows because of the comprehensive nature of the course. This I can do - I know how to code the quiz and return a correct or incorrect answer based on each individual checkbox and present a cumulative score (i.e. you got 57% correct). The issue lies in the fact that I would like to save the users' results and keep them informed of their progress. When they complete all of the quizzes, I would like to have a visual output of their performance in each area. Storing the output from their results is where I think I may run into a problem with my lack of coding experience. I would also like to have a sidebar with their progress in each section (10-15), with a green completion bar or a % correct which would draw from this data. I have never had to code something that stores information like this - so back to my question: would it be better to learn the language needed, or bring in a coder/developer for the back-end stuff?
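    For the browser side of this, a rough sketch of what the storage piece could look like (TypeScript; the key and field names are made up). It keeps per-quiz results in localStorage and computes the overall percentage for a progress bar; syncing results to a server for reporting would still be a separate back-end task.

      // Shape of one quiz result kept in the browser.
      interface QuizResult {
        quizId: string;   // e.g. "section3-quiz2" (hypothetical naming)
        correct: number;
        total: number;
      }

      const STORAGE_KEY = "quizResults"; // hypothetical key

      // Load whatever has been saved so far (empty list on first visit).
      function loadResults(): QuizResult[] {
        const raw = localStorage.getItem(STORAGE_KEY);
        return raw ? (JSON.parse(raw) as QuizResult[]) : [];
      }

      // Save or overwrite the result for one quiz.
      function saveResult(result: QuizResult): void {
        const results = loadResults().filter(r => r.quizId !== result.quizId);
        results.push(result);
        localStorage.setItem(STORAGE_KEY, JSON.stringify(results));
      }

      // Percentage correct across every stored quiz, for the sidebar bar.
      function overallScore(): number {
        const results = loadResults();
        const total = results.reduce((sum, r) => sum + r.total, 0);
        const correct = results.reduce((sum, r) => sum + r.correct, 0);
        return total === 0 ? 0 : Math.round((correct / total) * 100);
      }

      saveResult({ quizId: "section1-quiz1", correct: 17, total: 30 });
      console.log(`Overall: ${overallScore()}% correct`);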

    Read the article

  • How to restrict access to a specific wireless network to only 1 user profile in Windows 7.

    - by Mathlight
    Hi all, I'm using Win7 SP1. I've got multiple users on the laptop who can / must connect to a wireless network, let's call it Wireless1. I've got a second wireless network (let's call it Wireless2) which I want to limit to only the admin user of the laptop. Now I can remove Wireless2 in the network manager every time, but I want a more user-friendly solution, so that only the admin can connect to Wireless2 and all the other users cannot (they may see the network, but must enter the password, like with any other network). Any ideas?

    Read the article

  • Block by file type, not just file extension, using MDaemon

    - by Arjun Rajagopalan
    I've had users sending copyrighted files (songs, videos) to each other over email. I blocked the file extensions (.mp3, etc.). What some users have done is rename files to .doc and the like. I can't block .doc and similar file types because they are needed for day-to-day work. I'm using the MDaemon 12 mail server; does anyone know how to make it block these attachments? I've been working on some content-scanning-for-file-type code, but was wondering if there is an already-made solution?
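    For the content-scanning idea, the detection part is usually just a check of the first few bytes, since a renamed .mp3 still starts with its real signature. A rough sketch (TypeScript on Node; the extension list is arbitrary, and wiring this into MDaemon's content filter is left out and would be the real work):

      import { readFileSync } from "fs";

      // Read the first few bytes and guess the real format from well-known
      // signatures, regardless of what the file extension claims.
      function sniffFormat(path: string): string {
        const b = readFileSync(path).subarray(0, 12);

        // MP3: either an ID3v2 tag ("ID3") or an MPEG frame sync (0xFF 0xEx).
        if (b[0] === 0x49 && b[1] === 0x44 && b[2] === 0x33) return "mp3";
        if (b[0] === 0xff && (b[1] & 0xe0) === 0xe0) return "mp3";

        // MP4/M4A/MOV family: "ftyp" at byte offset 4.
        if (b[4] === 0x66 && b[5] === 0x74 && b[6] === 0x79 && b[7] === 0x70) return "mp4";

        // Legacy Office (.doc/.xls): OLE2 header D0 CF 11 E0.
        if (b[0] === 0xd0 && b[1] === 0xcf && b[2] === 0x11 && b[3] === 0xe0) return "ole2";

        // Modern Office (.docx) and other ZIP containers: "PK".
        if (b[0] === 0x50 && b[1] === 0x4b) return "zip";

        return "unknown";
      }

      // Flag an attachment whose claimed extension doesn't match the sniffed format.
      function looksRenamed(path: string): boolean {
        const ext = path.split(".").pop()?.toLowerCase() ?? "";
        const real = sniffFormat(path);
        const isMedia = real === "mp3" || real === "mp4";
        return isMedia && ext !== "mp3" && ext !== "mp4" && ext !== "m4a";
      }

      console.log(looksRenamed("quarterly-report.doc"));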

    Read the article

  • MRP/SCP (Not ASCP) Common Issues

    - by Annemarie Provisero
    ADVISOR WEBCAST: MRP/SCP (Not ASCP) Common Issues
    PRODUCT FAMILY: Manufacturing - Value Chain Planning
    March 9, 2010 at 8 am PT, 9 am MT, 11 am ET

    This session is intended for System Administrators, Database Administrators (DBAs), Functional Users, and Technical Users. We will discuss issues that are fairly common and provide general solutions to them. We will not only review PowerPoint material but also walk through some of the application setups and checks.

    TOPICS WILL INCLUDE:
      - Gig data memory limitation
      - Setup requirements for MRP Manager, Planning Manager, and Standard Manager
      - Why components are not planned
      - Sales Order flow to MRP
      - Calendars
      - Patching
      - Miscellaneous
      - Forecast Consumption (only if we have time)

    A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • License validation and calling home

    - by VitalyB
    I am developing an application that, when bought, can be activated using a license. Currently I am doing offline validation, which is a bit troubling to me. I am aware there is nothing to be done against cracks (i.e. modified binaries); however, I am trying to discourage license-key pirating. Here is my current plan:

      - When the user activates the software, and after offline validation is successful, it tries to call home and validate the license. If home approves of the license, or if home is unreachable, or if the user is offline, the license gets approved. If home is reached and says the license is invalid, validation fails.
      - The licensed application calls home the same way every time during startup (in the background). If the license is revoked (i.e. a pirated license or one generated via a keygen), the license gets deactivated.

    This should help with piracy of licenses: an invalid license will be disabled, and a valid license that was pirated can be revoked (and its legal owner supplied with a new license). Pirate users will be forced to use cracked versions, which are usually version specific and harder to find.

    While it generally sounds good to me, I have some concerns:

      - Users tend not to like home-calling and online validation. Would that kind of validation bother you, even though in the offline/failure case the application stays licensed?
      - It is clear that the whole scheme can be thwarted by going offline, using a firewall, etc. I think the bother of doing one of these is great enough to discourage casual license sharing, but I am not sure.
      - As it goes in general with licensing and DRM variations, I am not sure the time I spend on this kind of protection isn't better spent improving my product.

    I'd appreciate your input and thoughts. Thanks!
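    To make the plan concrete, a sketch of the call-home step as I picture it (TypeScript; the endpoint and payload are hypothetical). The important property is that it fails open: an unreachable or erroring server leaves the offline result standing, and only an explicit "revoked" answer deactivates the license.

      // Hypothetical endpoint; the real service and payload would differ.
      const LICENSE_SERVER = "https://licenses.example.com/check";

      type LicenseStatus = "valid" | "revoked";

      // The offline check stays the primary gate; this only *revokes* licenses
      // that the server explicitly reports as invalid or pirated.
      async function validateOnline(licenseKey: string): Promise<LicenseStatus> {
        try {
          const res = await fetch(LICENSE_SERVER, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ key: licenseKey }),
          });
          if (!res.ok) return "valid";              // server trouble: fail open
          const body = (await res.json()) as { revoked?: boolean };
          return body.revoked ? "revoked" : "valid";
        } catch {
          return "valid";                           // offline / firewall: fail open
        }
      }

      // Called in the background at startup, after the offline check passed.
      async function backgroundCheck(licenseKey: string): Promise<void> {
        const status = await validateOnline(licenseKey);
        if (status === "revoked") {
          // Here the offline validation layer would mark the key as no longer
          // accepted (whatever "deactivate" means in your scheme).
          console.warn("license revoked by server; deactivating");
        }
      }

      backgroundCheck("XXXX-XXXX-XXXX");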

    Read the article

  • How do I (quickly) let people know that software I am providing for free is not abandon-ware?

    - by blueberryfields
    As an independent, individual programmer: how do I let people very quickly know that I have not abandoned the software I've written and given away for free? That I am putting in the effort required to maintain and support my software to a professional level? When software written by one or two developers is available for free, or marked as open source, the default assumption is usually that it's abandon-ware. This is usually a safe assumption - check out the answers to this question if you doubt it: Why do programmers write applications and then make them free? There are lots of programmers who provide free and/or open-source tools which are not abandon-ware, though. If we're talking about large companies, e.g. Google, there's no real problem telling the difference between supported, live tools and software, and those which are abandoned or discontinued. The obvious signals don't quite work for me: a lively git repository isn't quick - users will have to be savvy enough to understand the repository and know where to look for it. Consistent marketing and community management take more time and effort than I can put in on my own. Also, if my software becomes popular/successful, I assume those will grow on their own and be supported by power users in the community.

    Read the article

  • Impacting the Future through Collaboration at Alliance 14

    - by Jeb Dasteel-Oracle
    We're hearing good things about the Alliance 14 conference held in Las Vegas by the Higher Education Users Group (HEUG) back in March. For those of you who aren't familiar with the Alliance conferences, they are global events dedicated to enhancing and educating HEUG members and the wider world on how higher education institutions can utilize Oracle applications to change how they do business. The HEUG is an all-volunteer organization made up of individuals who collaborate with Oracle as part of the evolving higher education industry. Conference participants network with peers from other institutions (regionally and globally) to share challenges, discuss solutions and ideas, and collaborate on HEUG strategic initiatives. The HEUG enables each institution to be a part of the ever-changing Oracle landscape. Watch the video below and hear directly from the attendees about their experience with Oracle and how being part of the HEUG has allowed them to collaborate with one of their most important resources... and with each other. Oracle is committed to fostering a strong and independent network of user groups worldwide. Currently over 900 groups provide dynamic forums for customers to share information, experiences and expertise. If you're interested in more information or in joining an Oracle User Group, become part of a vibrant network of engaged users finding the best ways to get the most value from their Oracle investment and collaborating to provide a unified feedback voice to Oracle. Catch you next time, Jeb

    Read the article

  • Nginx + Passenger running a RoR app is returning 401 when 302 is expected

    - by DBruns
    I've got a RoR app running on Passenger on top of Nginx. I'm using devise for my authentication method and have a link that gets sent in an email to users that requires authentication to view. If a user clicks the link from Outlook, and IE is the default browser, IE makes an HTTP request using the following headers:

      GET http://www.company.com/custom_layouts/108 HTTP/1.1
      Accept: */*
      Accept-Language: en-us
      User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)
      Accept-Encoding: gzip, deflate
      Connection: Keep-Alive
      Host: www.company.com

    Returning:

      HTTP/1.1 401 Unauthorized
      Content-Type: /; charset=utf-8
      Transfer-Encoding: chunked
      Connection: keep-alive
      Status: 401
      X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15
      WWW-Authenticate: Basic realm="Application"
      Cache-Control: no-cache
      X-UA-Compatible: IE=Edge,chrome=1
      Set-Cookie: _vxwer_session=[sessionstr]; path=/; HttpOnly
      X-Runtime: 0.011918
      Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack)

      31
      You need to sign in or sign up before continuing.
      0

    When the exact same URL is typed into the address bar, it does this:

      GET http://www.company.com/custom_layouts/108 HTTP/1.1
      Accept: image/jpeg, application/x-ms-application, image/gif, application/xaml+xml, image/pjpeg, application/x-ms-xbap, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
      Accept-Language: en-US
      User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)
      Accept-Encoding: gzip, deflate
      Connection: Keep-Alive
      Host: www.company.com

    Returning:

      HTTP/1.1 302 Found
      Content-Type: text/html; charset=utf-8
      Transfer-Encoding: chunked
      Connection: keep-alive
      Status: 302
      X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 2.2.15
      Location: http://www.company.com/users/sign_in
      Cache-Control: no-cache
      X-UA-Compatible: IE=Edge,chrome=1
      Set-Cookie: _xswer_session=[session_info_here]; path=/; HttpOnly
      X-Runtime: 0.010798
      Server: nginx/0.7.67 + Phusion Passenger 2.2.15 (mod_rails/mod_rack)

      6f
      <html><body>You are being <a href="http://www.company.com/users/sign_in">redirected</a>.</body></html>
      0

    I expect them to return the same thing regardless.

    Read the article

  • How do I publish a Power Point Presentation that is High Quality and no lag on the Web?

    - by Luke Hutton
    I have a ~22MB PowerPoint presentation (2007) that needs to be presented on a website for viewing. The file contains audio over several slides and some embedded images. What are the best practices, or the best way, to present the presentation so it gets delivered as quickly as possible and at the best quality to users? Some ideas I've thought of:
      - Somehow compress the file (.wav audio files, images) into a smaller presentation and save it as a PowerPoint Show (.pps) so users can download it and use the free PowerPoint Viewer?
      - Convert it to a video format (.avi) or something and stream it off the web? (Hopefully with freeware.)
      - Save it as a web page? (But then it's only viewable in IE, I believe.)

    Read the article

  • Test Doubles : Do they go in "source packages" or "test packages"?

    - by sbrattla
    I've got a couple of data access objects (DefaultPersonServices.class, DefaultAddressServices.class) which are responsible for various CRUD operations against a database. A few different classes use these services, but as the services require an established database connection, I can't really use them in unit tests: they take too long. Thus, I'd like to create test doubles for them and simply write FakePersonServices.class and FakeAddressService.class implementations which I can use throughout testing. Now, this is all good (I assume)... but my question relates to where I put the test doubles. Should I keep them along with the default implementations (aka the "real" implementations), or should I keep them in a corresponding test package? The default implementations are found in Source Packages: com.company.data.services. Should I keep the test doubles here too, or should the test doubles rather go in Test Packages: com.company.data.services?
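    For context, this is the shape of the doubles I have in mind (sketched in TypeScript for brevity; the Java layout is the same idea): both classes implement the services interface, only the default one touches the database, and the fake keeps everything in memory, which is also why the usual Maven/Gradle convention would place it under the test source tree rather than next to the production code.

      // The production code depends on this interface, not on a concrete class.
      interface PersonServices {
        findName(personId: number): Promise<string>;
        create(name: string): Promise<number>;
      }

      // Default implementation: would talk to the real database (stubbed here).
      class DefaultPersonServices implements PersonServices {
        async findName(personId: number): Promise<string> {
          throw new Error("opens a DB connection - too slow for unit tests");
        }
        async create(name: string): Promise<number> {
          throw new Error("opens a DB connection - too slow for unit tests");
        }
      }

      // Test double: same interface, backed by an in-memory map.
      class FakePersonServices implements PersonServices {
        private people = new Map<number, string>();
        private nextId = 1;

        async findName(personId: number): Promise<string> {
          const name = this.people.get(personId);
          if (name === undefined) throw new Error(`no person ${personId}`);
          return name;
        }
        async create(name: string): Promise<number> {
          const id = this.nextId++;
          this.people.set(id, name);
          return id;
        }
      }

      // A unit test wires in the fake and never touches the database.
      async function demo(): Promise<void> {
        const services: PersonServices = new FakePersonServices();
        const id = await services.create("Ada");
        console.log(await services.findName(id)); // "Ada"
      }
      demo();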

    Read the article

  • New Marketing Assets Available

    - by swalker
    NEW translated demand generation materials available for the following Oracle Marketing Kits, designed to help partners generate sales around Oracle's solutions:
      - Server & Storage: Improve Database Capacity Management with Oracle Storage and Hybrid Columnar Compression
      - Server & Storage: Accelerating Database Test & Development with Sun ZFS Storage Appliance
      - Server & Storage: Upgrade SAN Storage to Oracle Pillar Axiom
      - Server & Storage: SPARC Refresh with Oracle Solaris Operating System
      - Server & Storage: SPARC Server Refresh: The Next Level of Datacenter Performance with Oracle's New SPARC Servers
      - Server & Storage: Oracle Server Virtualization
      - Server & Storage: Oracle Desktop Virtualization

    Read the article

  • Workflow Automation software for SVN

    - by KyleMit
    We're currently using IBM's ClearQuest for task management and ClearCase for change management. They plug and play very well with each other. Users can create tasks in ClearQuest as defects and enhancements, and developers can use those tasks to check out and modify code in source control. We're looking to upgrade to a better, more modern source control system, like SVN, although we're not married to SVN specifically. There are loads of source control systems out there, but I'm having difficulty finding one that also includes the ability to have users enter and track tasks, especially in a way that is native to the source control system itself. Are there any products that replace ClearQuest for systems like SVN? Are there any other cheap / open source application pairs that handle both sides of the coin?

    Read the article

  • open source knowledge base CMS system

    - by Thomi
    I'm looking for an open source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like serverfault does). I've looked at twiki, which many people suggested, but haven't found what I'm looking for. Basically I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. Basically, all the knowledge base systems I have seen so far are a collection of articles, each article with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?", which then shows a list of any articles that match that search text. This is fine and dandy when your KB is for one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then go ahead and create a new version (ver 2.0) that adds many new features and changes some existing features. Now, how do we support both products in the same KB?

    The naive method is to copy all articles from 1.0 and update them for 2.0, adding and removing articles in 2.0 as required. We can then add text at the top of every 1.0 article that says: "this article applies to 1.0 only; to see the 2.0 version, click here" (or something similar). The problem with articles being indexed in the system by title is that it's very hard to filter based on meta-data like version. What happens when we create version 3.0 or 4.0? The end situation here is that you have a mess of articles. They're hard to search, hard to filter, and even harder to manage.

    The solution (it seems to me) is to use tags, rather than text, as the article index mechanism. So articles can be tagged with a tag representing the software version, topic area, etc. Users can then filter based on tags: an example search might be "version_1 printing", which straight away gives a list of articles carrying all these tags. So that's what I'm looking for: a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with drupal, but I was hoping for something that worked out-of-the-box.
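    For what it's worth, the filtering itself is tiny once articles carry tag sets instead of being keyed only by title; a sketch of the "version_1 printing" style query (TypeScript, made-up articles), mostly to show what I'd expect an off-the-shelf system to expose:

      interface Article {
        title: string;
        tags: Set<string>;
      }

      // An article matches when it carries every tag in the query.
      function findByTags(articles: Article[], query: string[]): Article[] {
        return articles.filter(a => query.every(tag => a.tags.has(tag)));
      }

      const kb: Article[] = [
        { title: "How do I print from AwesomeProduct?", tags: new Set(["version_1", "printing"]) },
        { title: "Printing changed in 2.0",             tags: new Set(["version_2", "printing"]) },
        { title: "Installing AwesomeProduct 1.0",       tags: new Set(["version_1", "install"]) },
      ];

      // "version_1 printing" -> only the 1.0 printing article comes back,
      // so supporting a new version means re-tagging, not copying, articles.
      console.log(findByTags(kb, ["version_1", "printing"]).map(a => a.title));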

    Read the article

  • Gmail Now Supports Google Drive Integration; Share Files Up to 10GB

    - by Jason Fitzpatrick
    Gmail users can now easily send large files thanks to Google Drive's increased integration with Gmail: blow through the 25MB in-email attachment limit and share files up to 10GB. From the official Gmail announcement: Have you ever tried to attach a file to an email only to find out it's too large to send? Now with Drive, you can insert files up to 10GB, 400 times larger than what you can send as a traditional attachment. Also, because you're sending a file stored in the cloud, all your recipients will have access to the same, most-up-to-date version. Like a smart assistant, Gmail will also double-check that your recipients all have access to any files you're sending. This works like Gmail's forgotten attachment detector: whenever you send a file from Drive that isn't shared with everyone, you'll be prompted with the option to change the file's sharing settings without leaving your email. It'll even work with Drive links pasted directly into emails. The new Gmail/Drive integration is rolling out in waves to users over the next few days and is accessible via the new Gmail compose window.

    Read the article

  • MongoDB: Replicate data in documents vs. “join”

    - by JavierCane
    Disclaimer: This is a question derived from this one. What do you think about the following example of a use case? I have a table containing orders. These orders have a lot of related information needed by my current queries (think of the products; the buyer information; the region, country and state of the sale point; and so on). In order to think with a de-normalized approach, I don't have to put identifiers of these related items in my main orders collection. Instead, I have to repeat all the information for each order (i.e. I will repeat the buyer's name, surname, etc. for each of their orders). Assuming the previous premise, I'm committing to maintaining all the data related to an order without a lot of updates (because if I modify the buyer's name, I'll have to iterate through all orders updating the ones made by the same buyer, and as MongoDB locks at a document level on updates, I would be blocking the entire order at the update moment). My questions:

      - Will I have to replicate all the products' related data? (i.e. category, maker and optional attributes like color, size...)
      - What if a new feature is requested and I have to make a lot of queries with the products "as the entry point of the query"? (i.e. reports showing the products' sales performance grouped by region, country, or whatever) Is it fair enough to apply the $unwind operation to my original orders collection? (What about the performance?) Or should I create another collection with these queries in mind and replicate all the products' information (and their orders) again?
      - Wouldn't it be better to store a product_id in the original orders collection in order to be more tolerant to requirements changes? (What about emulating JOINs?)
      - Would the optimal approach be a mixed solution with an RDBMS system like MySQL in order to retrieve the complete data? I mean: store products, users, and location identifiers in the orders collection and have queries in MySQL like getAllUsersDataByIds in which I would perform a SELECT * FROM users WHERE user_id IN ( :identifiers_retrieved_from_the_mongodb_query )
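    To put the two shapes side by side, a sketch (TypeScript object literals; field names are made up) of the fully embedded order versus the order that keeps identifiers, which is the shape the hybrid SELECT ... WHERE user_id IN (...) approach would need:

      // Fully de-normalized: everything the current queries need is copied in,
      // so a buyer rename means touching every one of that buyer's orders.
      const embeddedOrder = {
        _id: "order-1001",
        buyer: { name: "Ana", surname: "García", country: "ES" },
        products: [
          { name: "Widget", category: "tools", color: "red", price: 9.9 },
        ],
        salePoint: { region: "Madrid", country: "ES" },
        total: 9.9,
      };

      // Reference style: the order stores identifiers, and a second query
      // (MongoDB or MySQL) resolves them - the emulated JOIN from the question.
      const referencedOrder = {
        _id: "order-1001",
        buyer_id: 42,
        product_ids: ["widget-7"],
        sale_point_id: "madrid-3",
        total: 9.9,
      };

      // With references, the "products as entry point" report becomes:
      // collect the ids from matching orders, then fetch the rest by id.
      const idsToResolve = [referencedOrder.buyer_id];
      const sql = `SELECT * FROM users WHERE user_id IN (${idsToResolve.join(", ")})`;
      console.log(sql);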

    Read the article

  • Network DFS Shares Jumping Back To Root

    - by Taz
    We map several network drives to DFS locations via a logon script. Recently we've had a number of users complain of a very unusual behaviour when navigating these shares. They will be going through folders and will get 'rubber-banded' back to the root of the share. This will happen for a few minutes and then go back to behaving normally. The users are on Windows 7 and the fileshare is on Windows Server 2K8R2. Any idea what could be causing this annoying behaviour?

    Read the article

  • Using mod_speling with multi-level htaccess and rewriterules

    - by michaelcgorman
    We recently switched formats for managing our 301s. For the most part, everything went well, but it seems to have stopped mod_speling from working properly. Here's what we changed.

    Old /var/www/html/.htaccess:

      RewriteEngine on
      RewriteBase /
      # Change SHTML to HTML
      RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
      # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
      RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
      # Force WWW subdomain for all requests
      RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
      RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
      # User accounts are on sun.example.edu
      RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
      # Remove index.html at the end of URLs
      RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
      RewriteRule . %1 [R=301,NE,L]
      Redirect 301 /academics/calendar2012-13.html http://www.example.edu/academics/calendar.html
      Redirect 301 /academics/departments/ http://www.example.edu/majors/
      Redirect 301 /academics/Pre-Medical.pdf http://www.example.edu/academics/Pre-Medicine.pdf
      Redirect 301 ...

    New /var/www/html/.htaccess:

      RewriteEngine on
      RewriteBase /
      # Change SHTML to HTML
      RewriteRule ^(.*)\.shtml$ $1.html [R=permanent,L]
      # Change PCF to HTML ('cause, you know, we probably have CMS users like that...)
      RewriteRule ^(.*)\.pcf$ $1.html [R=permanent,L]
      # Force WWW subdomain for all requests
      RewriteCond %{HTTP_HOST} !^www.example.edu$ [NC]
      RewriteRule ^(.*)$ http://www.example.edu/$1 [R,L]
      # User accounts are on sun.example.edu
      RedirectMatch ^/~(.*)$ http://sun.example.edu/~$1
      # Remove index.html at the end of URLs
      RewriteCond %{REQUEST_URI} ^(.*/)index\.html$ [NC]
      RewriteRule . %1 [R=301,NE,L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.*) 404/$1

    And then we added a new file at /var/www/html/404/.htaccess:

      RewriteEngine on
      RewriteBase /404
      RewriteRule ^academics/calendar2012-13.html$ /academics/calendar.html [R=302,L]
      RewriteRule ^academics/departments/$ /majors/ [R=301,L]
      RewriteRule ^academics/Pre-Medical.pdf$ /academics/Pre-Medicine.pdf [R=301,L]
      RewriteRule ...

    I do have (Webmin-based) access to the httpd.conf (though we don't want to store all our 301s there, if possible). We're running Apache 2.2.15 on RHEL 6 on a server in our own data center. Like I said, the only problem we're seeing is that mod_speling isn't doing its magic anymore. The new format has so many advantages over the old that we really don't want to go back, but mod_speling is so nice to have that we'd also really like it to work if possible. Any ideas for how we might be able to fix mod_speling?

    Read the article

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I've always worked to make ClearTrace perform well. That's probably because I spend so much time watching it work. I'm often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files.

    One of my clients wanted to run a full trace for a week and then analyze the results. At the end of that week we had 847 200MB trace files for a total of nearly 170GB.

    I regularly use 200MB trace files when I monitor production systems. I usually get around 300,000 statements in a file that size if it's mostly stored procedures. So those 847 trace files contained roughly 250 million statements. (That's 730 bytes per statement if you're keeping track. Newer trace files have some compression in them but I'm not exactly sure what they're doing.) On a system running 1,000 statements per second I get a new file every five minutes or so.

    It took 27 hours to process these files on an older development box. That works out to 1.77MB/second. That means ClearTrace processed about 2,654 statements per second.

    You can query the data while you're loading it but I've found it works better to use a second instance of ClearTrace to do this. I'm not sure why yet but I think there's still some dependency between the two processes.

    ClearTrace is almost always CPU bound. It's really just a huge, ugly collection of regular expressions. It only writes a summary to its database at the end of each trace file so that usually isn't a bottleneck. At the end of this process, the executable was using roughly 435MB of RAM. Certainly more than when it started but I think that's acceptable.

    The database where all this is stored started out at 100MB. After processing 170GB of trace files the database had grown to 203MB. The space savings are due to the "datawarehouse-ish" design and only storing a summary of each trace file.

    You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012. Happy Tuning!

    Read the article

  • Cloud Deployment Models

    - by B R Clouse
    As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective. This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes.

    Most cloud deployments today are either private or public. It is also possible to connect a private cloud and a public cloud to form a hybrid cloud.

    A private cloud is for the exclusive use of an organization. Enterprises, universities and government agencies throughout the world are using private clouds. Some have designed, built and now manage their private clouds. Others use a private cloud that was built by and is now managed by a provider, hosted either onsite or at the provider's datacenter. Because private clouds are for exclusive use, they are usually the option chosen by organizations with concerns about data security and guaranteed performance.

    Public clouds are open to anyone with an Internet connection. Because they require no capital investment from their users, they are particularly attractive to companies with limited resources in less regulated environments and for temporary workloads such as development and test environments. Public clouds offer a range of products, from end-user software packages to more basic services such as databases or operating environments.

    Public clouds may also offer cloud services such as disaster recovery for a private cloud, or the ability to "cloudburst" a temporary workload spike from a private cloud to a public cloud. These are examples of a hybrid cloud. These are most feasible when the private and public clouds are built with similar technologies.

    Usually people think of a public cloud in terms of a user role, e.g., "Which public cloud should I consider using?" But someone needs to own and manage that public cloud. The company that owns and operates a public cloud is known as a public cloud provider. Oracle Database Cloud Service, Amazon RDS, database.com and Savvis Symphony Database are examples of public cloud database services.

    When evaluating deployment models, be aware that you can use any or all of the available options. Some workloads may be best suited for a private cloud, some for a public or hybrid cloud. And you might deploy multiple private clouds in your organization. If you are going to combine multiple clouds, then you want to make sure that each cloud is based on a consistent technology portfolio and architecture. This simplifies management and gives you the greatest flexibility in moving resources and workloads among your different clouds. Oracle's portfolio of cloud products and services enables both deployment models. Oracle can manage either model.

    Universities, government agencies and companies in all types of business everywhere in the world are using clouds built with the Oracle portfolio. By employing a consistent portfolio, these customers are able to run all of their workloads, from test and development to the most mission-critical, in a consistent manner: One Enterprise Cloud, powered by Oracle.

    Read the article

  • Add domain user as local admin in Windows 7 using VPN to connect to domain

    - by kev
    I am rebuilding my work computer from scratch and need to add my domain user as a local admin on my computer. I have successfully added my PC to the domain, but I cannot add my domain user account to the local admins. I have tried to do the following:
      - Connect to the work domain using a Windows VPN
      - Add my computer to the work domain
      - Start, right-click on Computer, Manage, go to Users and Groups
      - Right-click on the Administrators group and add my domain user
    The problem is that after adding my domain user to the Administrators group, I don't see my domain user under the Local Users group. When I try to log on as my domain user I get the following error message: "There are currently no logon servers available to service the logon request". Any ideas?

    Read the article
