Search Results

Search found 25005 results on 1001 pages for 'sequential number'.


  • asp.net website development component / APIs

    - by Haseeb Asif
    I have been assigned a new website project in my organization, where my role is to finalize all the tools/technologies/controls/APIs, etc. The website will be something like an online store, where every user has his own store as a subdomain, e.g. user1.myprojectdomain.com. I have been researching a number of options and need your suggestions at the following levels:
      - ASP.NET Web Forms vs. ASP.NET MVC: preferring ASP.NET Web Forms because of N-tier architecture, rapid application development, the large set of toolbox controls, and mainly our team's skill set
      - Error logging: ELMAH seems to be a nice library
      - Forums: YetAnotherForums
      - Online live chat: still looking for something (working on SignalR)
      - Signups with social media: Engage by Janrain
    And I need help with how to manage the subdomains. Do we create a virtual directory/application for every user in IIS at runtime, or can we do something else?
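
    On the subdomain question, a common alternative to creating a virtual directory/application per user at runtime is a wildcard DNS record (*.myprojectdomain.com) pointing at a single site, with the store resolved from the Host header on each request. The sketch below is not from the original post; it illustrates the idea as a minimal Python WSGI app (in ASP.NET the same lookup would live in the request pipeline of a site with a wildcard host binding):

      # Minimal sketch, assuming wildcard DNS: one app serves every subdomain
      # and picks the store name out of the Host header per request.
      def resolve_store(host):
          """Extract 'user1' from a host like 'user1.myprojectdomain.com'."""
          labels = host.split(":")[0].split(".")          # drop port, split labels
          return labels[0] if len(labels) > 2 else None   # None = bare domain

      def app(environ, start_response):
          store = resolve_store(environ.get("HTTP_HOST", ""))
          body = ("Store: " + store if store else "Main site").encode("utf-8")
          start_response("200 OK", [("Content-Type", "text/plain")])
          return [body]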

    Read the article

  • Announcing a new Free Windows Azure Platform Trial offer

    - by Eric Nelson
    We now have a truly useful Windows Azure Platform trial, which makes me very happy as I was a vocal critic of the original trial offer. Simply put, the small number of compute hours it included made it useless for many potential early adopters. This is now fixed: the new Introductory Special includes a generous 750 hours of compute – enough to run a web role 24/7. Enjoy! Related links: the full announcement. If you are an ISV, there is a better offer for you via Microsoft Platform Ready and Cloud Essentials, and keep an eye on our events for ISVs, as we will be doing Windows Azure Platform technical briefings starting March 31st.

    Read the article

  • Cursor freezes for 5 secs every now and then

    - by user20560
    I've installed Ubuntu 11.04 (64-bit) on my new Thinkpad Edge 11 laptop from Lenovo with the following specs:
      - Processor: AMD Athlon II Neo, 1.8 GHz
      - Memory: 2048 MB DDR3 SDRAM
      - Hard drive: 250 GB HDD
      - Graphics processor: ATI Mobility Radeon HD 6310
    Ubuntu has found all my hardware and it works perfectly. I have one irritating problem though: from time to time (sometimes every minute, other times every hour) the cursor freezes for about 5 seconds. This happens independently of the number of processes running on the laptop. It's only the cursor that freezes - I can still tab between windows and use the keyboard. I've installed GPointingDeviceSettings, activating the trackpoint, which btw works perfectly. I have also installed the ATI Catalyst proprietary display driver. Anyone have an idea of what's wrong? Thank you in advance. Best regards, Jens

    Read the article

  • New Whitepaper: Deploying E-Business Suite on Exadata and Exalogic

    - by Elke Phelps (Oracle Development)
    Our E-Business Suite Performance Team recently published a new whitepaper to assist you with deploying E-Business Suite on the Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine, also referred to as Exastack. If you are considering a migration to Exastack, this new whitepaper will assist you in understanding sizing requirements, deployment standards and migration strategies: Deploying Oracle E-Business Suite on Oracle Exalogic Elastic Cloud and Oracle Exadata Database Machine (Note 1460742.1). This whitepaper covers the following topics:
      - Scalability and Sizing Examples - provides performance benchmark analysis with concurrent user counts, scaling analysis and sizing recommendations
      - Deployment Standards - includes recommendations for deploying the various components of the E-Business Suite architecture on Exastack
      - Migration Standards and Guidelines - includes an overview of methods for migrating from commodity hardware to Exastack
    References: our Maximum Availability Architecture (MAA) team has a number of whitepapers that provide additional information regarding Oracle E-Business Suite on the Oracle Exadata Database Machine. Their library of whitepapers may be found here: MAA Best Practices - Oracle Applications Unlimited. Related articles: Running E-Business Suite on Exadata V2; Running Oracle E-Business Suite on Exalogic Elastic Cloud.

    Read the article

  • ROracle support for TimesTen In-Memory Database

    - by Sherry LaMonica
    Today's guest post comes from Jason Feldhaus, a Consulting Member of Technical Staff in the TimesTen Database organization at Oracle. He shares with us a sample session using ROracle with the TimesTen In-Memory Database.

    Beginning in version 1.1-4, ROracle includes support for the Oracle TimesTen In-Memory Database, version 11.2.2. TimesTen is a relational database that provides very fast response times and high throughput through its memory-centric architecture. TimesTen is designed for low latency, high-volume data, and event and transaction management. A TimesTen database resides entirely in memory, so no disk I/O is required for transactions and query operations. TimesTen is used in applications requiring very fast and predictable response times, such as real-time financial services trading applications and large web applications. TimesTen can be used as the database of record or as a relational cache database to Oracle Database.

    ROracle provides an interface between R and the database, providing the rich functionality of the R statistical programming environment using the SQL query language. ROracle uses the OCI libraries to handle database connections, providing much better performance than standard ODBC. The latest ROracle enhancements include:
      - Support for Oracle TimesTen In-Memory Database
      - Support for date-time using R's POSIXct/POSIXlt data types
      - RAW, BLOB and BFILE data type support
      - Option to specify the number of rows per fetch operation
      - Option to prefetch LOB data
      - Break support using Ctrl-C
      - Statement caching support

    TimesTen 11.2.2 contains enhanced support for analytics workloads and complex queries:
      - Analytic functions: AVG, SUM, COUNT, MAX, MIN, DENSE_RANK, RANK, ROW_NUMBER, FIRST_VALUE and LAST_VALUE
      - Analytic clauses: OVER PARTITION BY and OVER ORDER BY
      - Multidimensional grouping operators: grouping clauses GROUP BY CUBE, GROUP BY ROLLUP, GROUP BY GROUPING SETS, and grouping functions GROUP, GROUPING_ID, GROUP_ID
      - WITH clause, which allows repeated references to a named subquery block
      - Aggregate expressions over DISTINCT expressions
      - General expressions that return a character string in the source or a pattern within the LIKE predicate
      - Ability to order nulls first or last in a sort result (NULLS FIRST or NULLS LAST in the ORDER BY clause)
    Note: some functionality is only available with Oracle Exalytics; refer to the TimesTen product licensing document for details.

    Connecting to TimesTen is easy with ROracle. Simply install and load the ROracle package and load the driver.

      > install.packages("ROracle")
      > library(ROracle)
      Loading required package: DBI
      > drv <- dbDriver("Oracle")

    Once the ROracle package is installed, create a database connection object and connect to a TimesTen direct driver DSN as the OS user.

      > conn <- dbConnect(drv, username="", password="", dbname="localhost/SampleDb_1122:timesten_direct")

    You have the option to report the server type, confirming whether you are connected to Oracle or TimesTen:

      > print(paste("Server type =", dbGetInfo(conn)$serverType))
      [1] "Server type = TimesTen IMDB"

    To create tables in the database using R data frame objects, use the function dbWriteTable. In the following example we write the built-in iris data frame to TimesTen. The iris data set is a small example data set containing 150 rows and 5 columns. We include it here not to highlight performance, but so users can easily run this example in their R session.

      > dbWriteTable(conn, "IRIS", iris, overwrite=TRUE, ora.number=FALSE)
      [1] TRUE

    Verify that the newly created IRIS table is available in the database. To list the available tables and table columns in the database, use dbListTables and dbListFields, respectively.

      > dbListTables(conn)
      [1] "IRIS"
      > dbListFields(conn, "IRIS")
      [1] "SEPAL.LENGTH" "SEPAL.WIDTH" "PETAL.LENGTH" "PETAL.WIDTH" "SPECIES"

    To retrieve a summary of the data from the database, we need to save the results to a local object. The following call saves the results of the query as a local R object, iris.summary. The ROracle function dbGetQuery is used to execute an arbitrary SQL statement against the database. When connected to TimesTen, the SQL statement is processed completely within main memory for the fastest response time.

      > iris.summary <- dbGetQuery(conn, 'SELECT SPECIES, AVG ("SEPAL.LENGTH") AS AVG_SLENGTH, AVG ("SEPAL.WIDTH") AS AVG_SWIDTH, AVG ("PETAL.LENGTH") AS AVG_PLENGTH, AVG ("PETAL.WIDTH") AS AVG_PWIDTH FROM IRIS GROUP BY ROLLUP (SPECIES)')
      > iris.summary
           SPECIES AVG_SLENGTH AVG_SWIDTH AVG_PLENGTH AVG_PWIDTH
      1     setosa    5.006000   3.428000       1.462   0.246000
      2 versicolor    5.936000   2.770000       4.260   1.326000
      3  virginica    6.588000   2.974000       5.552   2.026000
      4       <NA>    5.843333   3.057333       3.758   1.199333

    Finally, disconnect from the TimesTen database.

      > dbCommit(conn)
      [1] TRUE
      > dbDisconnect(conn)
      [1] TRUE

    We encourage you to download Oracle software for evaluation from the Oracle Technology Network. See these links for our software: TimesTen In-Memory Database, ROracle. As always, we welcome comments and questions on the TimesTen and Oracle R technical forums.

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in, and in the previous part I went over my first impressions, including an overview of the features. In this installment I provide a little more detail on a few of the features that I really like.

    Dependency Matrix

    The dependency matrix is one of the rich visual components provided with NDepend. At a glance it lets you know where you have coupling problems, including cycles. It does this with a number indicating the weight of the dependency and a color coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (with the difference being whether the relationship is from row-to-column or column-to-row). Black cells are the ones that you really want to know about. These indicate that you have a cycle; that is, type A refers to type B and type B also refers to type A.

    But that's not the end of the story. A handy pop-up appears when you hover over the cell in question. It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency. That will show the dependencies on a method-by-method basis, allowing you to more easily target and fix the problem. When you're done you can click the back button on the toolbar.

    Dependency Graph

    The dependency graph is another component provided. It's complementary to the dependency matrix, but it isn't as easy to identify dependency issues using this window. On a positive note, it does provide more information than the matrix. My biggest issue with the dependency graph is determining what is shown. This was not readily obvious. I ended up using the navigation buttons to get an acceptable view. I would have liked to choose what I see.

    Once you see the types you want, you can get a decent idea of coupling strength based on the width of the dependency lines. Double-arrowed lines are problematic and are shown in red. The size of the boxes is related to the metric being displayed. This is controlled using the Box Size drop-down in the toolbar. Personally, I don't find the size of the box to be helpful, so I change it to Constant Font.

    One nice thing about the display is that you can see the entire path of dependencies when you hover over a type. This is done by color-coding the dependencies and dependants. It would be nice if selecting the box for the type would lock the highlighting in place. I did find a perhaps unintended work-around to the color-coding: you can lock it in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu. You can then do whatever you want with it, including saving it to an image file with the color-coding.

    CQL

    NDepend uses a code query language (CQL) to work with your code just like it was a database. CQL doesn't rival the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code. There are two main windows you'll use when working with CQL. The CQL Query Explorer allows you to define which queries (rules) are run as part of a report - I immediately unselected rules that I don't want in my results. The CQL Query Edit window is where you can view or author your own rules. The explorer window is pretty self-explanatory, so I won't mention it further other than to say that any queries you author will appear in the custom group.

    Authoring your own queries is really hard to screw up. The Intellisense-like pop-ups tell you what you can do while making composition easy. I was able to create a query within two minutes of playing with the editor. My query warns if any types that are interfaces don't start with an "I".

      WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    The results from the CQL Query Edit window are immediate. That fact makes it useful for ad hoc querying. It's worth mentioning two things that could make the experience smoother. First, out of habit from using Visual Studio I expect to be able to scroll and press Tab to select an item in the list (like Intellisense); you have to press Enter when you scroll to the item you want. Second, the commands are case-sensitive; I don't see a really good reason to enforce that. CQL has a lot of potential not just in enforcing code quality, but also in enforcing architectural constraints that your enterprise has defined.

    Up Next

    My next update will be the final part of the evaluation. I will summarize my experience and provide my conclusions on the NDepend add-in.

    ** View Part 1 of the Evaluation **
    ** View Part 2 of the Evaluation **

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Read the article

  • Formalizing programmers errors

    - by Maksee
    Every one of us makes errors leading to bugs. Once I wanted to start logging my errors for future analysis, probably mentioning the project title, the approximate time spent and, most importantly, the type of error. For example, when I copy-pasted a fragment about 'x', replaced every occurrence of 'x' with 'y' and forgot to replace a tiny piece, this goes under 'copy-paste error'. The usefulness of this approach depends on whether I can formalize my errors at all, and probably on minimizing the number of types to choose from. Otherwise I would start postponing, ignoring and so on, making the system useless. Is there existing research in this area, perhaps a known minimal set of error types? Maybe some of you have already tried to implement something like this and succeeded/failed?
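
    For what it's worth, here is a minimal sketch of what such a log could look like - the taxonomy below is my own illustration, not a known standard - with a deliberately small set of types plus an escape hatch so logging stays painless:

      from dataclasses import dataclass
      from enum import Enum

      class ErrorType(Enum):
          COPY_PASTE = "copy-paste"        # e.g. missed one 'x' -> 'y' rename
          OFF_BY_ONE = "off-by-one"
          WRONG_ASSUMPTION = "wrong assumption about API or data"
          TYPO = "typo"
          OTHER = "other"                  # escape hatch to keep logging easy

      @dataclass
      class ErrorRecord:
          project: str
          minutes_lost: int
          error_type: ErrorType
          note: str = ""

      log = [ErrorRecord("my-project", 30, ErrorType.COPY_PASTE,
                         "replaced 'x' with 'y', missed one occurrence")]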

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN T-SQL operator to exclude a certain set of values. Here's a very simple example table to illustrate the issue:

      Customers
        CustomerId    INT NOT NULL PRIMARY KEY
        CustomerName  NVARCHAR(100) NOT NULL
        SalesRegionId INT NULL

    The 'SalesRegionId' column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows:

      CustomerId  CustomerName  SalesRegionId
      1           Customer A    1
      2           Customer B    NULL
      3           Customer C    4
      4           Customer D    2
      5           Customer E    3

    How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this:

      SELECT
          CustomerId,
          CustomerName,
          SalesRegionId
      FROM Customers
      WHERE SalesRegionId NOT IN (2,4)

    Will this work? In short, no; at least not in the way that you might expect. Here's what this query will return given the example data we're working with:

      CustomerId  CustomerName  SalesRegionId
      1           Customer A    1
      5           Customer E    3

    I was expecting that this query would also return 'Customer B', since that customer has a NULL SalesRegionId. In my mind, a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn't suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test, and I simply made a poor assumption that the NOT IN would work the way that I thought it should.

    So why doesn't this work the way that I thought it should? From the MSDN documentation on the T-SQL IN operator:

      If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression.

    The key phrase out of that quote is, "... is equal to any expression from the comma-separated list...". The NULL SalesRegionId isn't included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS:

      The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name.

    In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in sub-queries or expressions that are used with the IN operator:

      Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values in together with IN or NOT IN can produce unexpected results.

    If I were to include a 'SET ANSI_NULLS OFF' command right above my SELECT statement I would get 'Customer B' returned in the results, but that's definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause:

      SELECT
          CustomerId,
          CustomerName,
          SalesRegionId
      FROM Customers
      WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)

    This query works and properly includes 'Customer B' in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we'd end up with:

      DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT)
      INSERT @regionsToIgnore VALUES (2),(4)

      SELECT
          c.CustomerId,
          c.CustomerName,
          c.SalesRegionId
      FROM Customers c
      LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId
      WHERE r.IgnoredRegionId IS NULL

    By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large, and it also will correctly include any customers that do not yet have a sales region.

    Read the article

  • How to promote travel blog? [closed]

    - by Tschareck
    Possible Duplicate: What are the best ways to increase your site's position in Google? How can I increase the traffic to my site? I know this question might seem a little off topic, but blogging may become an important part of travel. Nowadays, in the time of Facebook, Twitter and similar services, keeping a travel blog may seem a little archaic - it's not 2005 anymore. But a lot of my travel colleagues update their blogs and have a significant number of readers. I also tried to keep my blog when I travel. However, it seems that the only reader is my mum ;) What is your advice on promoting a travel blog?

    Read the article

  • Track those visitors who come through a particular link

    - by busybee235
    I want to track visitors who come to my site through a particular link. For example, for those visitors coming from http://www.domain.com/abc123, I can get their pageviews, time on site, bounce rate, referrer, pages per visit, etc. After that I can store that info in my database on a daily basis. Can anyone suggest any service, API or software for this? I have used Google Analytics utm tags, which work well for my requirement, but I don't know how many links I can track with them. I have around 80-100 links to track a day, and the number of links will be increasing. I couldn't find any documentation regarding a limit on campaigns in GA. If there's no such limit, I can start this project. Thanks
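
    For what it's worth, the utm-tagged links themselves are easy to mass-produce, which matters at 80-100 links a day. A small sketch (my own; the parameter values are illustrative) that builds one Google Analytics campaign URL per tracked link:

      from urllib.parse import urlencode

      def utm_link(base_url, source, medium, campaign):
          """Append standard GA campaign parameters to a landing-page URL."""
          params = urlencode({
              "utm_source": source,        # e.g. "partner-site"
              "utm_medium": medium,        # e.g. "referral"
              "utm_campaign": campaign,    # unique name per tracked link
          })
          return base_url + "?" + params

      print(utm_link("http://www.domain.com/abc123",
                     "partner-site", "referral", "link-0042"))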

    Read the article

  • What measures can be taken to increase Google indexing speed for a given newly created page?

    - by knorv
    Consider a website with a large number of pages, where new pages are published regularly. When publishing a new page, the website operator wants to get the newly created page indexed in Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. How do I get important-page.html indexed as soon as possible after 12:00? Ideally within seconds or minutes. Or more generally: what options are available to try to get Google to index a specific newly created page as soon as possible?
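
    One concrete tactic (a sketch, not from the original question): add the new URL to a sitemap at publish time and ping Google's sitemap endpoint immediately, so discovery doesn't wait for the next scheduled crawl. The endpoint below is Google's documented sitemap ping URL:

      from urllib.parse import quote
      from urllib.request import urlopen

      def ping_google(sitemap_url):
          """Notify Google that the sitemap changed, right after publishing."""
          ping = ("http://www.google.com/ping?sitemap="
                  + quote(sitemap_url, safe=""))
          with urlopen(ping) as resp:   # HTTP 200 means the ping was accepted
              return resp.status == 200

      ping_google("http://www.example.com/sitemap.xml")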

    Read the article

  • Library and several small programs that use it: how should I structure my git repository?

    - by Dan
    I have some code that uses a library that I and others frequently modify (usually only by adding functions and methods). We each keep a local fork of the library for our own use. I also have a lot of small "driver" programs (~100 lines) that use the library and are used exclusively by me. Currently, I have both the driver programs and the library in the same repository, because I frequently make changes to both that are logically connected (adding a function to the library and then calling it). I'd like to merge my fork of the library with my co-workers' forks, but I don't want the driver programs to be part of the merged library. What's the best way to organize the git repositories for a large, shared library that needs to be merged frequently and a number of small programs that have changes that are connected to changes in the library?
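
    One common layout for this situation (a suggestion, not from the original question; the repository URLs below are placeholders) is to give the library its own repository and reference it from the drivers repository as a git submodule, so the library forks can merge independently of the driver programs:

      # inside the drivers repository: track the shared library as a submodule
      git submodule add https://example.com/shared-lib.git lib
      git commit -m "Track shared-lib as a submodule"

      # co-workers clone both repositories in one step
      git clone --recursive https://example.com/drivers.git

      # pull merged library changes, then record the new submodule version
      cd lib && git pull origin master && cd ..
      git add lib && git commit -m "Update lib to latest merged version"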

    Read the article

  • Webcast - C-level Perspectives on How HR Can Take on a Bigger Role in Strategic Planning

    - by Scott Ewart
    The Economist Intelligence Unit (EIU), on behalf of IBM and Oracle, recently surveyed a number of C-level executives in North America and Western Europe to understand how HR can take on a bigger role in driving growth. The resulting reports highlight the actions senior HR leaders can take to place themselves at the heart of the debate on a company's strategic direction. In this session, IBM and Oracle HCM specialists will review the findings of the EIU research reports and provide guidance on how technology innovation can help to align talent strategies with long-term business goals. Participants will gain an understanding of the following:
      - Results of the Economist Intelligence Unit study on "Executive Perceptions of the HR Function"
      - Differences in perspective between CEOs and CFOs
      - How the HR professional can take a bigger role in driving business growth
    Join us on Thursday, October 25 for a live webcast. Speakers: Gina Wells, Global Oracle HCM Leader, IBM Global Business Services; Michelle Newell, Senior Director, HCM Applications Marketing, Oracle. Register here for the webcast on Thursday, October 25.

    Read the article

  • How do I morph between meshes that have different vertex counts?

    - by elijaheac
    I am using MeshMorpher from the Unify wiki in my Unity project, and I want to be able to transform between arbitrary meshes. This utility works best when there are an equal number of vertices between the two meshes. Is there some way to equalize the vertex count between a set of meshes? I don't mean that this would reduce the vertex count of a mesh, but would rather add redundant vertices to any meshes with smaller counts. However, if there is an alternate method of handling this (other than increasing vertices), I would like to know.
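
    One simple way to add the redundant vertices (an illustrative sketch, not MeshMorpher's actual API; a real mesh would also need consistent triangle indices) is to pad each smaller vertex list by repeating its last vertex until every mesh matches the largest count, so the morph can pair vertices index-by-index:

      def equalize_vertex_counts(meshes):
          """Pad each vertex list to the length of the largest one."""
          target = max(len(m) for m in meshes)
          return [m + [m[-1]] * (target - len(m)) for m in meshes]

      a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]              # 3 vertices
      b = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]   # 4 vertices
      a2, b2 = equalize_vertex_counts([a, b])
      assert len(a2) == len(b2) == 4                     # counts now match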

    Read the article

  • Why is chunk size often a power of two?

    - by danijar
    There are many Minecraft clones out there and I am working on my own implementation. A principle of terrain rendering is tiling the whole world into fixed-size chunks to reduce the effort of localized changes. In Minecraft the chunk size is 16 x 16 x 256 as far as I know, and in clones I have also always seen chunk sizes that are a power of two. Is there any reason for that, maybe performance or memory related? I know that powers of 2 play a special role in binary computers, but what has that to do with the chunk size?
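
    The usual answer is cheap coordinate math: with a power-of-two chunk size, splitting a world coordinate into (chunk index, offset within chunk) is a bit shift and a bit mask instead of a division and a modulo, and it stays correct for negative coordinates. A small sketch of the idea:

      CHUNK_SIZE = 16        # 2**4
      SHIFT = 4              # log2(CHUNK_SIZE)
      MASK = CHUNK_SIZE - 1  # 0b1111

      def chunk_and_offset(x):
          """Map a world coordinate to (chunk index, offset inside chunk)."""
          return x >> SHIFT, x & MASK

      print(chunk_and_offset(37))   # (2, 5): block 5 of chunk 2
      print(chunk_and_offset(-1))   # (-1, 15): last block of chunk -1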

    Read the article

  • How to create Windows XP LiveUSB using Ubuntu to replace it

    - by Orion Clark
    I am using an Acer Aspire One netbook with no CD drive, and would like to uninstall Ubuntu 12.04 LTS and install Windows XP in its place. The problem here is that I can't seem to find a program that can put the Windows boot files on a USB drive from an ISO file. I have Ubuntu fully installed and have tried using UNetbootin. When I tried booting from the UNetbootin drive I got a screen with a blue box containing the highlighted word "default". Underneath the box there was a countdown that said "will boot from default in 10". After the countdown finished, the number would revert to ten and nothing would happen. Can someone tell me another program that would be useful for this, please?

    Read the article

  • Using RegEx's in Multi-Channel Funnels in Google Analytics

    - by Rob H
    For some reason, I can't get my multi-channel funnel which utilizes RegExes in the path steps to function - it keeps coming back with no data. There are a few variables which may be holding things up, but I can't figure out the origin of the problem, nor a solution. Here's the situation:
      - The funnel is tracking conversions, defined as when a user completes 4 steps to sign up
      - Steps are not "required"
      - Default URL is set to https://example.com
      - There is a 302 redirect set up on our site that leads from http://example.com to https://example.com
      - Within the funnel, steps switch from non-secure pages (unless the browser is set to secure browsing) to secure pages once the user moves from the landing page to the second page of the sign-up process (account placeholder has been created)
      - The URL at that point contains the variable of the publisher number within (but not at the end of) the URL
      - My RegExes are all properly written, as tested on rubular.com
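
    As a quick sanity check outside of Google Analytics (an illustrative sketch; the step pattern and paths below are invented, not from the original question), each step's pattern can be run against the real page paths it must match, including the variant with the publisher number in the middle of the URL:

      import re

      # hypothetical funnel step: path with a publisher number mid-URL
      step2 = re.compile(r"^/signup/\d+/account$")

      for path in ["/signup/12345/account", "/signup/account"]:
          print(path, bool(step2.match(path)))   # matches only the first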

    Read the article

  • Why ISVs Run Applications on Oracle SuperCluster

    - by Parnian Taidi-Oracle
    In this short video, Michael Palmeter, Senior Director of Product Development for Oracle Engineered Systems, discusses how ISVs can easily run up to 20x faster, gain 28:1 storage compression, and grow their presence in the market, all without any changes to their code. One of the family of Oracle engineered systems, Oracle SuperCluster provides maximum end-to-end database and application performance with minimal initial and ongoing support and maintenance effort, at the lowest total cost of ownership. Java or enterprise applications running on Oracle Database 11gR2 or higher and Oracle Solaris 11 can run up to 20x faster on Oracle SuperCluster than on traditional platforms without any changes to their code. A large number of customers are consolidating hundreds of their applications and databases on Oracle SuperCluster and are requiring their ISVs to support it. ISVs can become Oracle SuperCluster Ready and Oracle SuperCluster Optimized by joining the Oracle Exastack program.

    Read the article

  • sun.com - SMTP 521

    - by alexismp
    reason: 521 5.0.0 messages are no longer accepted for sun.com
    It's been planned for a while now - sun.com email addresses are no longer accepted and no longer forwarded to oracle.com. So check your contacts and update old Sun email addresses. While this will probably cut down the spam for a number of us, you may need the new stable email address - most Oracle email addresses use the same first.last @ oracle.com pattern (but there are a few homonyms in a company with 100k+ employees). If you need to contact us (TheAquarium), the email address is in the "Contact Us" section on the blog.

    Read the article

  • Identity verification

    - by acjohnson55
    On a site that I'm working on, I'm trying to find ways of enforcing a one-person, one-account rule. In general, we'd like to do this by providing options to authenticate users with third-party services that provide this assurance. For example, it's possible to authenticate with Facebook and check whether the user is considered "verified" by Facebook (which means they must have provided either a phone number or credit card). This is roughly the level of identity verification we require--we're not doing banking or anything like that. But we want to give the user options. My question is, who else, besides Facebook, provides this? (uncertain of the proper SE forum, please comment if there's a better SE site to ask this)

    Read the article

  • How to to let Google know about dynamic content?

    - by Yaniv
    I'm looking for the best practice for letting Google know about a vast number of dynamically created pages. Let's say (I mean - dream) that I'm Facebook and I want to let Google index all the users' posts. Two options:
      1. Sitemap.xml may be the answer, but each sitemap is limited to 50,000 URLs. I know that I can create 500 sitemaps and a sitemap of sitemaps, but they are also limited; 25,000,000 URLs sounds like quite enough at the moment, but it could cause problems in the future. I.e., Stack Overflow already has 3 million posts, so sitemaps are probably not the solution for them.
      2. Creating a page with paging, with links to all the dynamic data. I guess this is what Stack Overflow did by creating this page here: http://stackoverflow.com/questions
    So I think that option 2 is the answer, but it seems to me that sitemaps might have some added value. So what should I do?
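
    For the sitemap route, the sharding itself is mechanical. A minimal sketch (my own; the file names and host are invented) that splits the URLs into 50,000-entry files and lists them in one sitemap index:

      def write_sitemaps(urls, per_file=50000):
          """Shard urls into sitemap files plus one sitemap index."""
          ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
          shards = [urls[i:i + per_file] for i in range(0, len(urls), per_file)]
          for n, shard in enumerate(shards):
              with open("sitemap-%d.xml" % n, "w") as f:
                  f.write('<urlset xmlns="%s">\n' % ns)
                  f.writelines("  <url><loc>%s</loc></url>\n" % u for u in shard)
                  f.write("</urlset>\n")
          with open("sitemap-index.xml", "w") as f:
              f.write('<sitemapindex xmlns="%s">\n' % ns)
              for n in range(len(shards)):
                  f.write("  <sitemap><loc>http://example.com/sitemap-%d.xml"
                          "</loc></sitemap>\n" % n)
              f.write("</sitemapindex>\n")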

    Read the article

  • Why would a web site keep my signup information for a limited time only?

    - by Alois Mahdal
    I have just created an account at (some web service; well, actually it was Transifex, a localization service). The registration form requested typical things: account name, e-mail address, password (twice), and, optionally, company name and phone number. What confused me was this sentence on the confirmation page (the one right after submitting the form): "We will store your signup information for 7 days on our server." Can anybody explain what this means? What exactly are they referring to by "signup information", if it's something that should be kept for only 7 days? Or is my account going to be destroyed after that time? (Well, that could make sense for some special services, but not for this one.)

    Read the article

  • Scrum - how to carry over a partially complete User Story to the next Sprint without skewing the backlog

    - by Nick
    We're using Scrum and occasionally find that we can't quite finish a User Story in the sprint in which it was planned. In true Scrum style, we ship the software anyway and consider including the User Story in the next sprint during the next Sprint Planning session. Given that the User Story we are carrying over is partially complete, how do we estimate for it correctly in the next Sprint Planning session? We have considered:
      a) Adjusting the number of Story Points down to reflect just the work which remains to complete the User Story. Unfortunately this will mess up reporting on the Product Backlog.
      b) Closing the partially-completed User Story and raising a new one to implement the remainder of that feature, which will have fewer Story Points. This will affect our ability to retrospectively see what we didn't complete in that sprint, and seems a bit time consuming.
      c) Not bothering with either a or b and continuing to guess during Sprint Planning, saying things like "Well, that User Story may be X story points, but I know it's 95% finished, so I'm sure we can fit it in."

    Read the article

  • Slides and Scripts from Metalogix Webcast Master Your SharePoint Migration With PowerShell

    - by Brian Jackett
    Thanks to everyone who attended the Metalogix webcast “Master Your SharePoint Migration with PowerShell” I guest presented on today.  We had great attendance and no technical hitches which is always a plus.  A number of attendees asked for my slide deck which you can find at the link below.  As a bonus I am including a set of demo scripts that I typically use with the longer version of this presentation.  If you have any questions or comments please feel free to reach out to me.  A big thanks once again to Metalogix for giving me the opportunity to work with them. Scripts and Slidedeck Click Here         -Frog Out

    Read the article

  • How do I hire testers by giving them a buggy app for testing their efficiency?

    - by Jay
    My boss wants to recruit testers based on their testing efficiency (number of bugs identified). So he's shortlisted 5 people, and I need to give them an app full of bugs and see how they fare in reporting obvious bugs and hidden bugs. I know... it kind of sounds weird. I guess this is just like the coding world, where you hire a programmer by assessing his/her programming ability (which is a little easier). Once hired, these testers would be testing a Java Swing app, so familiarity with testing frameworks/tools is not really required. So my question here is: how do I go about finding buggy apps (web/non-web), preferably Java ones, that I can have the shortlisted testers have a go at? How would you go about this task if your boss asked you to do so? I am kind of clueless at this point - I googled a bit and thought about finding new apps on SourceForge with lots of bugs, but neither approach worked for me.

    Read the article
