Search Results

Search found 21356 results on 855 pages for 'check digit'.

Page 613/855 | < Previous Page | 609 610 611 612 613 614 615 616 617 618 619 620  | Next Page >

  • Managing Matrix Relationships: Organization Visualization and Navigation

    - by Nancy Estell Zoder
    Oracle is pleased to announce the posting of our latest feature, Matrix Relationship Administration. Our continued investment in our Organization Visualization and Navigation solution is demonstrated by the release of Matrix Relationship Administration, as well as the enhancements made to our Org Viewer capabilities. Some of those enhancements include the ability to export to Excel and Visio, Search, Zoom, and the addition of Manager Self Service transactions. Matrix relationships are relationships defined by rules or ad hoc. These relationships can include, but are not limited to, product or project affiliations and functional groups, including multidimensional relationships such as when the product, region or even the customer is the profit center. The PeopleSoft solution will enable you to configure how you work in this multidimensional world to ensure you have the tools to be productive. For more information, please check out the datasheet available on oracle.com or the video on YouTube, or contact your sales representative.

    Read the article

  • SQL SERVER – DELETE, TRUNCATE and RESEED Identity

    - by pinaldave
    Yesterday I had a headache answering questions from one of the DBAs on the subject of resetting identity values for all tables. After talking to the DBA I realized that he had no clue about how the identity column behaves when DELETE, TRUNCATE or RESEED is used. Let us run a small T-SQL script. Create a temp table with an identity column beginning with value 11. The seed value is 11.

        USE [TempDB]
        GO
        -- Create Table
        CREATE TABLE [dbo].[TestTable](
            [ID] [int] IDENTITY(11,1) NOT NULL,
            [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO

    With the seed value at 11, the first row inserted gets identity value 11.

        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of DELETE statement:

        -- Delete Data
        DELETE FROM [TestTable]
        GO

    When the DELETE statement is executed without a WHERE clause it deletes all the rows. However, when a new record is inserted the identity value is increased from 11 to 12. It does not reset but keeps on increasing.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]

    Effect of TRUNCATE statement:

        -- Truncate table
        TRUNCATE TABLE [TestTable]
        GO

    When the TRUNCATE statement is executed it removes all the rows, and when a new record is inserted the identity value starts again from 11 (the original value). TRUNCATE resets the identity value to the original seed value of the table.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Effect of RESEED statement: Notice that I am using a reseed value of 1. The original seed value when I created the table was 11; however, I am reseeding it with value 1.

        -- Reseed
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO

    When we insert one more value and check it, the new value generated is 2. The logic for the new value is reseed value + interval value - in this case 1 + 1 = 2.

        -- Build sample data
        INSERT INTO [TestTable] VALUES ('val')
        GO
        -- Select Data
        SELECT * FROM [TestTable]
        GO

    Here is the clean up act.

        -- Clean up
        DROP TABLE [TestTable]
        GO

    Question for you: If I reseed with some random number and then run the TRUNCATE command on the table, what will the seed value of the table be? (For example, if the original seed value is 11 and I reseed to 1, and then truncate the table, what is the seed value now?) Here is the complete script together. You can modify it and find the answer to the above question. Please leave a comment with your answer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
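    A quick way to find the answer yourself: the minimal test script below is a sketch adapted from the article's own example, with the reseed-then-truncate ordering being the only change.

        USE [TempDB]
        GO
        CREATE TABLE [dbo].[TestTable](
            [ID] [int] IDENTITY(11,1) NOT NULL,
            [var] [nchar](10) NULL
        ) ON [PRIMARY]
        GO
        -- Reseed to 1, then truncate
        DBCC CHECKIDENT ('TestTable', RESEED, 1)
        GO
        TRUNCATE TABLE [TestTable]
        GO
        -- Does the next identity value come from the reseed (1) or the original seed (11)?
        INSERT INTO [TestTable] VALUES ('val')
        GO
        SELECT * FROM [TestTable]
        GO
        DROP TABLE [TestTable]
        GO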

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase “A picture is worth a thousand words” is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexities, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy. At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve, as we presented during the last Oracle Open World and the transformational effect that visualization can have on PLM processes and performance (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information). Organizations are likely to see greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM) and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity as users can benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing rather than all historical annotations. The last stage on the curve is what we call augmented business visualization (ABV).  ABV is an innovative framework which lets structured data (from Oracle’s Agile PLM for instance) interact with unstructured data (documents, design, 3D models, etc). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used in order to identify parts with specific characteristics (for example pending quality issues) and you can take actions directly from within the context of documents and designs, maximizing user productivity. Those who had the chance to attend our PLM session during Oracle Open World already got a sneak peek of our latest augmented business visualization for Oracle’s Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled “The PLM State: the Manhattan Project-Oracle’s Next Big Secret Weapon” that “this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions.” If you are interested in learning more about ABV for Oracle’s Agile PLM and hear about real examples of usage of visualization at all stages of the visualization maturity curve, don’t miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • Devoxx!!

    - by Yolande
    Announcing Devoxx London! Taking place on March 26th and 27th, 2013, right before Devoxx France on March 28th and 29th, this will be the first edition of Devoxx UK! The call for papers begins on December 1st for Devoxx in London and Paris. Speakers will be able to present at the two conferences in the same week. Oracle committed to fully sponsor the three Devoxx conferences in 2013 with a platinum sponsorship. Over 5,000 developers are expected to attend those conferences. Five dancing NAO robots welcomed attendees at the keynote. Stephan Janssen offered the JUGs the chance to replicate Devoxx4kids workshops using his content and web infrastructure. He recommended organizing kids' events because "the workshops were really fun and such a rewarding experience." Stephan also announced the redesign of Parleys with HTML5 and GlassFish. It will be friendlier to speakers: they will be able to post their slides online before their talks and then sync the talk's soundtrack with the slides. Nandini Ramani, VP of product development, explained in her keynote address the growing role of Java, from enterprise application development to cloud computing to embedded machine-to-machine systems. "Java continues to drive the applications and devices that enrich our interactivity with the world around us," she said. The Java platform has expanded its reach with OS X and Linux ARM support on Java SE and with two new releases, Java SE Embedded and the Java Embedded Suite 7.0 middleware platform. Coming up next year is JDK 8, which will include Project Lambda, Project Nashorn and more. As part of that release, JavaFX will offer 3D and third-party component integration. At Devoxx, the slick and interactive schedules were designed with JavaFX. The earliest version of the Java EE 7 SDK is available for download and has WebSocket support, improved JSON support and more. Stephen Chin arrived on stage with his bike, ending his European NightHacking tour. Check out the hacking sessions online here

    Read the article

  • Set Covering: Runtime hang/error at function call in C

    - by EnthuCrazy
    I am implementing a set covering application which uses the cover function int cover(set *skill_list, set *player_list, set *covering). Suppose skill_list = {a,b,c,d,e} and player_list = {s1,s2,s3}; then the output is covering = {s1,s3}, where say s1 = {a,b,c}, s3 = {d,e} and s2 = {b,d}. Now when I am calling this function it hangs at runtime (set_cover.exe stopped working). Here is my cover function:

        typedef struct Spst_ {
            void *key;
            set *st;
        } Spst;

        int cover(set *skill_list, set *player_list, set *covering)
        {
            Liste *member, *max_member;
            Spst *subset;
            set *intersection;
            void **data;
            int max_size;
            set_init(covering);  /* to initialize set covering initially */
            while (skill_list->size > 0 && player_list->size > 0) {
                max_size = 0;
                for (member = player_list->head; member != NULL; member = member->next) {
                    if (set_intersection(intersection, ((Spst *)(member->data))->st, skill_list) != 0)
                        return -1;
                    if (intersection->size > max_size) {
                        max_member = member;
                        max_size = intersection->size;
                    }
                    set_destroy(intersection);  /* at the end of iteration */
                }
                if (max_size == 0)  /* to check for no covering */
                    return -1;
                /* insert max subset from player list into covering */
                subset = (Spst *)max_member->data;
                set_inselem(covering, subset);
                /* remove its elements from skill list */
                for (member = (((Spst *)max_member->data)->st->head); member != NULL; member = member->next) {
                    data = (void **)member->data;
                    set_remelem(skill_list, data);
                }
                /* remove subset from the set of subsets, player list */
                set_remelem(player_list, (void **)subset);
            }
            if (skill_list->size > 0)
                return -1;
            return 0;
        }

    Now assuming I have defined three sets (as stated above) and call from main as cover(skills, subsets, covering); => runtime hang. Please give input on the missing link here, or on the prerequisites required for a proper call to this kind of function. EDIT: Assume the other functions used in cover are tested and working fine.
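    One plausible culprit to check: intersection is declared as a pointer but never pointed at an initialized set before set_intersection writes through it. A minimal sketch of the inner loop with a caller-owned destination set, assuming set_init and set_destroy accept a set * to initialize and tear down (as they appear to for covering):

        /* Sketch only: use a set value instead of a dangling pointer. */
        set intersection;
        for (member = player_list->head; member != NULL; member = member->next) {
            set_init(&intersection);  /* initialize before each use */
            if (set_intersection(&intersection, ((Spst *)(member->data))->st, skill_list) != 0)
                return -1;
            if (intersection.size > max_size) {
                max_member = member;
                max_size = intersection.size;
            }
            set_destroy(&intersection);  /* release at the end of each iteration */
        }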

    Read the article

  • How to fix 'grub error file not found' when installing 12.04?

    - by Tomasz Grabowski
    I'm trying to install Ubuntu. I don't know if it is important, but I'm trying to install it on an external HDD. In the end I have an external bootable HDD which only displays:

        error: file not found
        grub recovery>

    From the beginning: I downloaded ubuntu-12.04-desktop-i386.iso and used LiLi USB Creator (LinuxLive) to create a bootable pendrive from that image. I booted from it; it works. I clicked "Try Ubuntu"; that works too. I used GParted to look over the drives (disks): my primary embedded disk is seen as /dev/sda, my attached external disk as /dev/sdb, and my pendrive as /dev/sdc. I created partitions on /dev/sdb: the first partition for the system (over 200 GiB); the second was there already (it's xfs, and I don't want to touch it :P); the third is an extended partition, with one logical partition (10 GiB) for swap. I started the installation and chose "something else" on, I believe, the second screen. There I selected /dev/sdb as the boot disk; for the first partition of /dev/sdb I chose the ext3 file system, checked the "format" checkbox, and set the mount path to "/"; the first logical partition was set as the swap partition. After the installation finished, I restarted my computer. When I boot from my primary disk it works OK; my previous operating system - Vista - works OK. When I set my BIOS to boot from my external disk, I only get that message:

        error: file not found
        grub recovery>

    I tried to reinstall, but it didn't help... In desperation, I tried to read a bit about that "grub recovery" command line and experiment a bit... I'm not sure if this had any point, or if it gives you some information (notice that I don't know what I'm doing :P). When I typed the command insmod (hd1,1)/boot/grub/linux.mod I got the message "unknown filesystem"; the same with insmod (hd1,msdos1)/boot/grub/linux.mod and with insmod ext3, but I get no message after the command insmod ext2. Notice that I really don't know what this command exactly does, but I thought that maybe if I reinstalled Ubuntu with the ext2 filesystem, it would work. I did that, but the symptoms are the same. I went back to the Live version of Ubuntu; the filesystem and basic directories seem to be present on /dev/sdb1. I'm completely unfamiliar with GRUB. I also don't know which version of GRUB it is; I hope there is only one version on ubuntu-12.04-desktop-i386.iso. Any help? Thanks
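    For reference, a typical GRUB rescue sequence to locate the boot files and hand control back to the normal boot process looks like the sketch below; the partition name (hd1,msdos1) is only an example, and which one is right is found from the ls output:

        ls
        ls (hd1,msdos1)/boot/grub
        set prefix=(hd1,msdos1)/boot/grub
        set root=(hd1,msdos1)
        insmod normal
        normal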

    Read the article

  • SQL Azure and Trust Services

    - by BuckWoody
    Microsoft is working on a new Windows Azure service called "Trust Services". Trust Services takes a certificate you upload and uses it to encrypt and decrypt sensitive data in the cloud. Of course, like any security service, there's a bit more to it than that. I'll give you a quick overview of how you can use this product to protect data you send to SQL Azure. The primary issue with storing data in the cloud is that you are in an environment that isn't under your control - in fact, that's the benefit of being in a distributed computing environment in the first place. On premises you're able to encrypt data you don't want anyone else to see, using various methods such as passwords (not very strong) or certificates (stronger). When you use a certificate, it's vital that you create (or procure) and protect it yourself. When you store data remotely, regardless of IaaS, PaaS or SaaS, you don't own the machines where the data lives. That means if you use a certificate from the cloud vendor to encrypt the data, you have to trust that the data won't be accessed by the vendor. In some cases having a signed agreement with the vendor that they won't access your data is sufficient; in other cases that doesn't meet the requirements your system has for security. With the new Trust Services service, the basic process is that you use a portal to create a Trust Server using policies and other controls. You place an X.509 certificate you create or procure in that server. Using the Software Development Kit (SDK), the developer has access to an Application Layer Encryption Framework to set the fields of data they want to encrypt. From there, the data can be stored in SQL Azure as a standard field - only it is encrypted before it ever arrives. The portion of the client software that decrypts the data uses the same service, so the authenticated user sees the data if they are allowed to do so. The data remains encrypted "at rest". You can learn more about this product and check it out in the SQL Azure labs at Microsoft Codename "Trust Services"
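    The post doesn't show the Trust Services SDK itself, but the encrypt-before-store pattern it describes can be sketched with standard .NET crypto classes. This is a minimal sketch; the method name and padding choice are assumptions, not the Trust Services API:

        using System;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;
        using System.Text;

        static class FieldEncryptionSketch
        {
            // Encrypt one sensitive field with the certificate's public key before
            // it is sent to SQL Azure; only holders of the private key can decrypt.
            public static string EncryptField(string plaintext, X509Certificate2 cert)
            {
                var rsa = (RSACryptoServiceProvider)cert.PublicKey.Key;
                byte[] cipher = rsa.Encrypt(Encoding.UTF8.GetBytes(plaintext), true); // OAEP padding
                return Convert.ToBase64String(cipher); // store this string in a standard column
            }
        }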

    Read the article

  • No sound after clean install 11.10

    - by Jorge
    First of all, sorry to ask this; I'm sure this was asked many times before. Second, sorry for the English, it's not my native language. And third, thank you in advance. I hope the following info will help; here's a log: http://www.alsa-project.org/db/?f=07089caf530494bc4bc23e1d1cd56b3a5fae03c6 I already checked 'System - Preferences - Sound'. Here's a screenshot: http://i.imgur.com/Ghwnj.png

        jorge@jorge-desktop:~$ sudo lshw -class multimedia
        *-multimedia
            description: Multimedia audio controller
            product: VT8233/A/8235/8237 AC97 Audio Controller
            vendor: VIA Technologies, Inc.
            physical id: 11.5
            bus info: pci@0000:00:11.5
            version: 60
            width: 32 bits
            clock: 33MHz
            capabilities: pm cap_list
            configuration: driver=VIA 82xx Audio latency=0
            resources: irq:22 ioport:e400(size=256)

    Tried with no results:

        sudo apt-get remove --purge alsa-base
        sudo apt-get remove --purge pulseaudio
        sudo apt-get clean && sudo apt-get autoremove
        sudo apt-get install alsa-base
        sudo apt-get install pulseaudio
        sudo apt-get install ubuntu-desktop

    Also edited /etc/default/grub (sudo gedit /etc/default/grub), changing:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

    to:

        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash radeon.audio=1"

    then ran sudo update-grub and rebooted... without any result. EDIT: I made sure that everything is fine with aplay -l, lspci -v and lsmod, and checked alsamixer; it's not muted. Well, I'm running out of ideas. Thanks.

    Read the article

  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword: This is a followup to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only. Problem: Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words: If any of the end points are outside the polygon, they are not in line of sight. If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight. If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight. But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected, and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge, but one of the end points is outside the polygon. One point of particular interest is the difference between 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How to distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which that would fail. Point 7 was also pretty tricky and I had to treat it as a special case, which checks if the two points are adjacent vertices of the polygon directly. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
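    As a reference for the building block these tests rely on, here is a minimal sketch of a segment crossing check with a floating-point tolerance. The epsilon value and the choice to treat within-epsilon touches as non-crossing are assumptions to illustrate the tolerance handling, not a full answer to the edge cases above:

        using Microsoft.Xna.Framework; // for Vector2 (gamedev context assumed)

        static class LineOfSightSketch
        {
            const float Eps = 1e-5f; // tolerance; tune to the coordinate scale

            // Signed area of triangle (o, a, b): >0 left turn, <0 right turn, ~0 collinear
            static float Cross(Vector2 o, Vector2 a, Vector2 b)
            {
                return (a.X - o.X) * (b.Y - o.Y) - (a.Y - o.Y) * (b.X - o.X);
            }

            // True only for a strict ("proper") crossing; endpoints that merely touch
            // the other segment within Eps do not count, so segments ending exactly
            // on a vertex or edge are not rejected by this test alone.
            public static bool SegmentsCrossProperly(Vector2 p1, Vector2 p2, Vector2 q1, Vector2 q2)
            {
                float d1 = Cross(q1, q2, p1);
                float d2 = Cross(q1, q2, p2);
                float d3 = Cross(p1, p2, q1);
                float d4 = Cross(p1, p2, q2);
                return ((d1 > Eps && d2 < -Eps) || (d1 < -Eps && d2 > Eps))
                    && ((d3 > Eps && d4 < -Eps) || (d3 < -Eps && d4 > Eps));
            }
        }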

    Read the article

  • Build Dependencies and Silverlight 4

    - by Kyle Burns
    At my current position, I've been doing quite a bit of Silverlight development and have also been working with TFS2010 build services to enable continuous integration. One of the critical pieces of a successful continuous build setup (and also one of the benefits of having one) is that the build system should be able to "get latest" against the source repository and immediately build with no errors. This can break down both in an automated build scenario and a "new guy" scenario when the solution has external dependencies that may not be present in the build environment. The method that I use to address the dependency issue is to store all of the binaries upon which my solution depends in a folder under the solution root called "Reference Items". I keep this folder as part of the solution and check all of the binaries into source control, so when I get the latest version of the solution from source control all of the binaries are downloaded to my machine as well. This gets me closer to the ideal where a new developer installs the development IDE, gets latest and can immediately build and run unit tests before jumping into coding the feature of the day. This all sounds pretty good (and it is), but a little while back I ran into one of those little hiccups that requires a little manual intervention. The issue that I ran into is that with Silverlight (at least version 4), the behavior of the "Add Reference" command when adding a reference to a DLL that is present in the GAC is to omit the HintPath element that it includes with regular .Net projects, so even if the DLL is sitting in the Reference Items folder and downloaded to the build machine, it cannot be found at compile time and the build will fail. To work around this behavior, you need to be comfortable editing the XML project files generated by Visual Studio (in my case this is typically a .csproj file). Simply open the project file in your favorite text editor, find the Reference element that refers to the component, and modify the XML to include the HintPath. Here's a before and after example of the component that ultimately led me to the investigation behind this post:

    Before:

        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL" />

    After:

        <Reference Include="Telerik.Windows.Controls, Version=2011.2.920.1040, Culture=neutral, PublicKeyToken=5803cfa389c90ce7, processorArchitecture=MSIL">
            <HintPath>..\Reference Items\Telerik.Windows.Controls.dll</HintPath>
        </Reference>

    Read the article

  • What can a Service do on Windows?

    - by Akemi Iwaya
    If you open up Task Manager or Process Explorer on your system, you will see many services running. But how much of an impact can a service have on your system, especially if it is 'corrupted' by malware? Today's SuperUser Q&A post has the answers to a curious reader's questions. Today's Question & Answer session comes to us courtesy of SuperUser - a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question: SuperUser reader Forivin wants to know how much impact a service can have on a Windows system, especially if it is 'corrupted' by malware: What kind of malware/spyware could someone put into a service that does not have its own process on Windows? I mean services that use svchost.exe, for example. Could a service spy on my keyboard input? Take screenshots? Send and/or receive data over the internet? Infect other processes or files? Delete files? Kill processes? How much impact could a service have on a Windows installation? Are there any limits to what a malware-'corrupted' service could do? The Answer: SuperUser contributor Keltari has the answer for us: What is a service? A service is an application, no more, no less. The advantage is that a service can run without a user session. This allows things like databases, backups, the ability to log in, etc. to run when needed and without a user logged in. What is svchost? According to Microsoft: "svchost.exe is a generic host process name for services that run from dynamic-link libraries". Could we have that in English please? Some time ago, Microsoft started moving all of the functionality from internal Windows services into .dll files instead of .exe files. From a programming perspective, this makes more sense for reusability... but the problem is that you cannot launch a .dll file directly from Windows; it has to be loaded up from a running executable (.exe). Thus the svchost.exe process was born. So, essentially a service which uses svchost is just calling a .dll and can do pretty much anything with the right credentials and/or permissions. If I remember correctly, there are viruses and other malware that do hide behind the svchost process, or name their executable svchost.exe to avoid detection. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
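    As a practical aside (not from the original thread): the built-in tasklist command can show which services are running inside each svchost.exe instance, which helps when deciding whether a given host process looks legitimate:

        tasklist /svc /fi "imagename eq svchost.exe"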

    Read the article

  • 2010 is gone and Welcome 2011

    - by anirudha
    In the last days I spent my week at Firozabad. The town is quite small and near to Agra, so I didn't forget to see the Taj Mahal and the Red Fort; it was even my first chance to see them. I made a plan to go to Agra last Saturday. First I went to the Red Fort, and I talked with many foreigners. They love talking with the one man who is with them, their GUIDE - a person like a book, who never just talks with you but tells you about everything at the location, because you paid for him. There were many people from various countries such as Germany, Japan, Russia, Italy and many others. There was no problem talking with them; perhaps they were happy to talk to me. When I had completely seen the Red Fort, at last I saw a girl who looked like a foreigner. I asked where they came from and they told me France. When I went elsewhere I thought to propose that they be a friend of mine. I had never proposed friendship to any girl, even in school and college, but I proposed it to them. They accepted, and I put my email ID in their hand as they left, but I still have not got their mail. Secondly, I went to the Taj Mahal. The Taj experience was not so good; I spent 3 or 4 hours in the rush. I found there is no security even though there are many army forces; all the staff work too slowly. They spent 10 minutes checking one person for security; their hands work very slowly, just like a low-configuration computer. I talked to many people there too, including a person who called himself Jacob, from Chicago. He spoke very fast and I didn't know what he was saying. Another problem I had was with some Chinese people: when I was talking with them, I found they speak only the Chinese language. Wish you a very, very happy new year.

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-28

    - by Bob Rhubart
    You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one "The better option [to IaaS] is to rationalize the deployment stack so that VMs are needed only for exceptional cases," says B. R. Clouse. "By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as Platform as a Service (PaaS)." 'Shadow IT' can be the cloud's best friend | David Linthicum "I do not advocate that IT give up control and allow business units to adopt any old technology they want," says Infoworld cloud computing blogger David Linthicum. "However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." Do you agree? 9 ways cloud will impact IT employment | ZDNet ZDNet blogger Joe McKendrick condenses information from a recent report on how cloud computing will impact IT jobs. Number one on the list: New categories of jobs arising from cloud computing, which include "private cloud developers and administrators, departmental liaisons, integration specialists, cloud architects, and compliance specialists." Yeah, that's right, cloud architects. For more on cloud architects, including what you need to up your game to thrive in the cloud, check out "The Role of the Cloud Architect" on the OTN ArchBeat Podcast. Decisions, Decisions: The art, science, and politics of technology selection "When the time comes for a solution architect to make the final decision about the technologies, standards, and other elements that are to be incorporated into a particular project, what factors weigh most heavily on that decision? It comes as no surprise that among the architects I contacted, business needs top the list." Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center Anand Akela's byline is on this post, but "Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering" appears at the end of the post, so it's anybody's guess as to who wrote this thing. But the content includes a complete listing of the Exalogic 2.0.1 Tea Break Snippets series written by a member of the Exalogic team who goes by the name "The Old Toxophilist." So maybe the best thing to do here is ignore the names and focus on the very useful content. Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci Nino Guarnacci describes a use case that involved managing a variety of data caches that process complex queries and parallel computational operations, in order to maintain the caches in a consistent state on different server instances. Thought for the Day "No one hates software more than software developers." — Jeff Atwood Source: SoftwareQuotes

    Read the article

  • RPi and Java Embedded GPIO: Sensor Connections for Java Enabled Interface

    - by hinkmond
    Now we're ready to connect the hardware needed to make a static electricity sensor for the Raspberry Pi and use Java code to access it through a GPIO port. First, very carefully bend the NTE312 (or MPF-102) transistor "gate" pin (see the diagram on the back of the package or refer to the pin diagram on the Web). You can see it in the inset photo on the bottom left corner. I bent the leftmost pin of the NTE312 transistor as I held the flat part toward me. That is going to be your antenna. So, connect one of the jumper wires to the bent pin. I used the dark green jumper wire (looks almost black; coiled at the bottom) in the photo. Then push the other 2 pins of the transistor into your breadboard. Connect one of the pins to Pin # 1 (3.3V) on the GPIO header of your RPi. See the diagram if you need to glance back at it. In the photo, that's the orange jumper wire. And connect the final unconnected transistor pin to Pin # 22 (GPIO25) on the RPi header. That's the blue jumper wire in my photo. For reference, connect the LED anode (long pin on a common anode LED/short pin on a common cathode LED, check your LED pin diagram) to the same breadboard hole that is connecting to Pin # 22 (same row of holes where the blue wire is connected), and connect the other pin of the LED to GROUND (row of holes that connect to the black wire in the photo). Test by blowing up a balloon, rubbing it on your hair (or your co-worker's hair, if you are hair-challenged) to statically charge it, and bringing it near your antenna (green wire in the photo). The LED should light up when it's near and go off when you pull it away. If you need more static charge, find a co-worker with really long hair, or rub the balloon on a piece of silk (which is just as good but not as fun). Next blog post is where we do some Java coding to access this sensor on your RPi. Finally, back to software! Ha! Hinkmond
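    Until that post arrives, here is a minimal sketch of one common approach: polling the Linux sysfs GPIO interface from Java. The class name, polling rate, and the prior sysfs export steps are assumptions, not the code from the upcoming post:

        import java.io.BufferedReader;
        import java.io.File;
        import java.io.FileReader;

        public class GpioSensorSketch {
            // Reads GPIO25 via the Linux sysfs interface. Assumes the pin has
            // already been exported and set as an input, e.g. from a shell:
            //   echo 25 > /sys/class/gpio/export
            //   echo in > /sys/class/gpio/gpio25/direction
            public static void main(String[] args) throws Exception {
                File value = new File("/sys/class/gpio/gpio25/value");
                while (true) {
                    try (BufferedReader r = new BufferedReader(new FileReader(value))) {
                        String level = r.readLine();            // "1" when charge is detected
                        System.out.println("GPIO25 = " + level);
                    }
                    Thread.sleep(200);                          // poll five times per second
                }
            }
        }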

    Read the article

  • Backup Azure Tables with the Enzo Backup API

    - by Herve Roggero
    In case you missed it, you can now backup (and restore) Azure Tables and SQL Databases using an API directly. The features available through the API can be found here: http://www.bluesyntax.net/backup20api.aspx and the online help for the API is here: http://www.bluesyntax.net/EnzoCloudBackup20/APIIntro.aspx. Backing up Azure Tables can't be any easier than with the Enzo Backup API. Here is a sample code that does the trick:

        // Create the backup helper class. The constructor automatically sets the SourceStorageAccount property
        StorageBackupHelper backup = new StorageBackupHelper(
            "storageaccountname", "storageaccountkey",
            "sourceStorageaccountname", "sourceStorageaccountkey",
            true, "apilicensekey");

        // Now set some properties...
        backup.UseCloudAgent = false;                       // backup locally
        backup.DeviceURI = @"c:\TMP\azuretablebackup.bkp";  // to this file
        backup.Override = true;
        backup.Location = DeviceLocation.LocalFile;

        // Set optional performance options
        backup.PKTableStrategy.Mode = BSC.Backup.API.TableStrategyMode.GUID; // GUID strategy by default
        backup.MaxRESTPerSec = 200; // Attempt to stay below 200 REST calls per second

        // Start the backup now...
        string taskId = backup.Backup();

        // Use the Environment class to get the final status of the operation
        EnvironmentHelper env = new EnvironmentHelper("storageaccountname", "storageaccountkey", "apilicensekey");
        string status = env.GetOperationStatus(taskId);

    As you can see above, the code is straightforward. You provide connection settings in the constructor, set a few options indicating where the backup device will be located, set optional performance parameters and start the backup. The performance options are designed to help you back up your Azure Tables quickly, while attempting to stay under a specific threshold to prevent Storage Account throttling. For example, the MaxRESTPerSec property will attempt to keep the overall backup operation under 200 REST calls per second. Another performance option is the backup strategy for Azure Tables. By default, all tables are simply scanned. While this works best for smaller Azure Tables, larger tables can use the GUID strategy, which will issue requests against an Azure Table in parallel, assuming the PartitionKey stores GUID values. Your PartitionKey doesn't have to contain GUIDs for this strategy to work, however; the backup algorithm is simply tuned for that condition. Other options are available as well, such as filtering which columns, entities or tables are being backed up. Check out more on the Blue Syntax website at http://www.bluesyntax.net.

    Read the article

  • What if you could work on anything you wanted?

    - by red@work
    This week we've downed our tools and organised ourselves into small project teams or struck out alone. We're working on whatever we like, with whoever we like, wherever we like. We've called it Down Tools Week and so far it's a blast. It all started a few months ago with an idea from Neil, our CEO. Neil wanted to capture the excitement, innovation, and productivity of Coding by the Sea and extend this to all Red Gaters working in Product Development. A brainstorm is always a good place to start for an "anything goes" project. Half of Red Gate piled into our largest meeting room (it's pretty big) armed with flip charts, post-its and a heightened sense of possibility. An hour or so later our SQL Servery walls were covered in project ideas. So what would you do, if you could work on anything you wanted? Many projects are related to tools we already make, others are for internal product development use and some are, well, just something completely different. Someone suggested we point a web cam at the SQL Servery lunch queue so we can check it before heading to lunch. That one couldn't wait for Down Tools Week. It was up and running within a few days and, even better, it captures the table tennis table too. Thursday is the Show and Tell - I am looking forward to seeing what everyone has come up with. Some of the projects will turn into new products or features, so this probably isn't the time or place to go into detail of what is being worked on. Rest assured, you'll hear all about it! We're making a video as we go along too, which will be up on our website soon. In the meantime, all meetings are cancelled, we've got plenty of food in and people are being very creative with the £500 expenses budget (Richard, do you really need an iPad?). It's brilliant to see it all coming together from the idea stage to reality. Catch up with our progress by following #downtoolsweek on Twitter. Who knows, maybe a future Red Gate flagship tool is coming to life right now? By the way, it's business as usual for our customer facing and internal operations teams. Hmm, maybe we can all down tools for a week and ask Product Development to hold the fort? Post by: Alice Chapman

    Read the article

  • A Facelift for Fusion

    - by Richard Lefebvre
    It's simple. It's modern. It was the buzz at OpenWorld in San Francisco. See what the UX team has been up to and what customers are going to love. At OpenWorld 2012, the Oracle Applications User Experience (UX) team unveiled the new face of Fusion Applications. You might have seen it in sessions presented by Chris Leone, Anthony Lye, Jeremy Ashley and others, or you may have gotten a look on the demogrounds. Why are we delivering a new face for Fusion Applications? "Because," says Ashley, vice president of the Oracle Applications User Experience team, "we want to provide a simple, modern, productive way for users to complete their top quick-entry tasks. The idea is to provide a clear, productive user experience that is backed by the full functionality of Fusion Applications." The first release of the new face of Fusion focuses on three types of users. It provides a fully functional gateway to Fusion Applications for:

      • New and casual users who need quick access to self-service tasks
      • Professional users who need fast access to quick-entry, high-volume tasks
      • Users who are looking for a way to quickly brand their portal for employees

    The new face of Fusion allows users to move easily from navigation to action, Ashley said, and it has been designed for any device - Mac, PC, iPad, Android, SmartBoard - in the browser. How Did We Build It? The new face of Fusion essentially is a custom shell, developed by the Apps UX team, and a set of page templates that embodies a simple design aesthetic. It's repeatable, providing consistency across its pages, and the need for training is little to zero. More specifically, the new face of Fusion has been built on ADF. The Applications UX team created pages in JDeveloper using local task flows bound to existing view objects. Three new components were commissioned from ADF and existing Fusion components were re-skinned to deliver a simple, modern user experience. It really is that simple - and to prove that point, we've been sharing our new face of Fusion story on several Oracle channels such as this one. If you want to learn more, check out the OpenWorld presentation on the Fusion Learning Center.

    Read the article

  • What's the best way to install the GD graphics library for Nagios?

    - by user1196
    While trying to install Nagios 3.2.3, I ran their ./configure script and got these errors:

        checking for main in -liconv... no
        checking for gdImagePng in -lgd (order 1)... no
        checking for gdImagePng in -lgd (order 2)... no
        checking for gdImagePng in -lgd (order 3)... no
        checking for gdImagePng in -lgd (order 4)... no
        *** GD, PNG, and/or JPEG libraries could not be located... *********
        Boutell's GD library is required to compile the statusmap, trends
        and histogram CGIs. Get it from http://www.boutell.com/gd/, compile
        it, and use the --with-gd-lib and --with-gd-inc arguments to specify
        the locations of the GD library and include files.
        NOTE: In addition to the gd-devel library, you'll also need to make
        sure you have the png-devel and jpeg-devel libraries installed on
        your system.
        NOTE: After you install the necessary libraries on your system:
        1. Make sure /etc/ld.so.conf has an entry for the directory in
           which the GD, PNG, and JPEG libraries are installed.
        2. Run 'ldconfig' to update the run-time linker options.
        3. Run 'make clean' in the Nagios distribution to clean out any old
           references to your previous compile.
        4. Rerun the configure script.
        NOTE: If you can't get the configure script to recognize the GD libs
        on your system, get over it and move on to other things. The CGIs
        that use the GD libs are just a small part of the entire Nagios
        package. Get everything else working first and then revisit the
        problem. Make sure to check the nagios-users mailing list archives
        for possible solutions to GD library problems when you resume your
        troubleshooting.
        ********************************************************************

    Which package do I want? libgd2-xpm-dev? libgd2-noxpm-dev? php5-gd? I'm not looking to do any image processing myself - I just want to get Nagios working.
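    For what it's worth, on a Debian/Ubuntu system one plausible sequence following the configure script's own notes would be the commands below; the package names are assumptions for that era, so verify with apt-cache search libgd first:

        sudo apt-get install libgd2-xpm-dev libpng-dev libjpeg-dev
        sudo ldconfig
        make clean
        ./configure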

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-21

    - by Bob Rhubart
    The Real Architects of Los Angeles: OTN Architect Day in LA - Oct 25 No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday October 25, 2012. 8:00 a.m. – 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Why IT is a profession in 'flux' | ZDNet I usually don't post two items from the same person in one day, but this post from ZDNet blogger Joe McKendrick deals with some critical issues affecting those in IT. As McKendrick puts it: "IT professionals are under considerable pressure to deliver more value to the business, versus being good at coding and testing and deploying and integrating." Cloud, automation drive new growth in SOA governance market | ZDNet "SOA governance tools and processes learned over the past decade are now underpinning cloud projects as they scale across enterprises," reports Joe McKendrick. But there remains a lack of understanding about SOA Governance. For a broader discussion of the importance of IT governance check out the latest OTN Archbeat Podcast: By Any Other Name: Governance and Architecture How I Simplified the Installation of Oracle Database on Oracle Linux 6 Michele Casey's update of Ginny Henningsen's original article. This version shows how to simplify the installation of Oracle Database 11g on Oracle Linux 6 by installing the oracle-rdbms-server-11gR2-preinstall RPM package. Fault Handling Slides and Q&A | Ronald van Luttikhuizen Oracle ACE Director Ronald van Luttikhuizen shares the slides and a Q&A transcript from a presentation he and fellow ACE Director Guido Schmutz gave at the recent Oracle OpenWorld and JavaOne preview event organized by AMIS Technology. BPM ADF Task forms. Checking whether the current user is in a BPM Swimlane | Christopher Karl Chan "The tricky part here is that the ADF Task Details Form is in fact part of a separate J2EE application to the main workspace," says Oracle Fusion Middleware A-Team member Christopher Karl Chan. "So if you try to use Java or Expression Language to get the logged in user you will only find anonymous and none of the BPM Roles you will be expecting. So what to do?" Don't guess—read the post and find out. Thought for the Day "The speed of change makes you wonder what will become of architecture." — Tadao Ando Source: BrainyQuote

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea, it's a REST API delivering content to mobiles, and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all, so I needed a profiler that supported IIS Express that I can run my integration tests against, and Google gave me:   http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise   Excellent. Following the above guide an instance of IIS Express and is launched, as is Internet Explorer. The latter eventually becomes annoying, I would like to decide whether I want a browser opened, but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath.     Although useful and fascinating this wasn't what I was expecting to see, I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous.     I did find that if you switch to Methods Grid before Call tree has loaded, you do not get the red warnings.   StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago, I'd forgotten all about it. It was calling Validate<T>() which in turn was resolving a validator from StructureMap. Note to self, use //TODO: when leaving smelly code lying around.     Before refactoring, remember to Save Profile Results from the File menu. Annoyingly you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again.   Having implemented StructureMap’s ForGenericType, I ran my tests again and:     Win, thankyou ANTS (What does ANTS stand for BTW?)   There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase I can see enormous benefit from getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven’t.   Next I’m going to profile a load test.

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what should be the best options ? Couple  options are available - Use one of OUD internal capability called Replication Gateway - Use our synchronization tool called Directory Integration Platform part of Oracle Directory Services Plus - Manuel export and import Let's check pro and cons on each method. Replication Gateway is the natural, out of the box solution to perform the task. We created this as a feature of OUD because it works at our replication protocol level. The gateway perform the required adaptation between the ODSEE's replication protocol and OUD's one. The benefits of doing this is that it provide strong consistency between the to type of directories. This fully leverage conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modification on existing ODSEE production instances such as turning on "retro changelog". Changes are propagated at near speed of replication in both directions. Replication Gateway can also synchronize information that are stored internally in the directory server such as "xxxxx" account locking managed at ODSEE server level and not via the nsyyyy attribute. OUD replication gateway does no require any specific tools or installation specific procedure. It is manged like other OUD component with monitoring and configuration via the standard console. OUD Replication Gateway does not perform adaptation between ODSEE and OUD. Using Directory Integration Protocol as external component to OUD, brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay in using DIP to perform the synchronization task. You will have to turn on the retro change log to get access to changes on the ODSEE side (this will impact disk and CPU usage and performances which could be a serious challenge for your existing ODSEE environment (if you have not provisioned additional hardware and instances). You will not benefits of conflict resolution management and this might have to be addressed at application level, which is not always possible to implement. Using export and import seams very simple, but this methodology cannot ensure an highly available deployment with up to date entries on booth sides. This solution can be used if full HA with up-to-date data is not needed (during synchronization time). It often used  if data-cleaning need to take place to avoid polluting a new environment with old un-necessary data.

    Read the article

  • Mobile Shopping Alerts

    - by David Dorf
    It's been popular to offer coupons when people check in to a store, because you're catching them at the best possible time - they're presumably in a shopping state of mind, and they're at your store. But wouldn't it be even better to catch the people walking by your store and entice them to visit? That's the concept of geo-fences. When people enter a geographic zone, they are sent a relevant text message alerting them about something nearby. I wrote about Placecast doing this for The North Face, noting that the messages were a unique combination of both offers and useful information about outdoor activities. After creating a program with European carrier O2, Placecast recently entered into an agreement to provide similar services to AT&T customers. The ShopAlerts program lets AT&T customers in Chicago, Los Angeles, New York City, and San Francisco opt in to receive these messages. The program will be expanded nationwide as early as this summer. It's a much better model for customers (and Placecast) to sign up once with the carrier instead of with each individual retailer, but I hope the messages aren't restricted to advertising. I really like the idea of providing other information, such as nearby special events, races, and perhaps even things to avoid, like construction.

    Read the article

  • I need to move an entity to the mouse location after I right-click

    - by I.Hristov
    Well, I've read the related questions and answers but still can't find a way to move my champion to the mouse position after a right-button mouse click. I use this code at the top:

        float speed = (float)1/3;

    And this is in my void Update:

        // check if right mouse button is clicked
        if (mouse.RightButton == ButtonState.Released && previousButtonState == ButtonState.Pressed)
        {
            // gets the position of the mouse in mousePosition
            mousePosition = new Vector2(mouse.X, mouse.Y);

            // gets the current position of champion (the drawRectangle)
            currentChampionPosition = new Vector2(drawRectangle.X, drawRectangle.Y);

            // move champion to mouse position:
            // handles the case when the mouse position is really close to current position
            if (Math.Abs(currentChampionPosition.X - mousePosition.X) <= speed &&
                Math.Abs(currentChampionPosition.Y - mousePosition.Y) <= speed)
            {
                drawRectangle.X = (int)mousePosition.X;
                drawRectangle.Y = (int)mousePosition.Y;
            }
            else if (currentChampionPosition != mousePosition)
            {
                drawRectangle.X += (int)((mousePosition.X - currentChampionPosition.X) * speed);
                drawRectangle.Y += (int)((mousePosition.Y - currentChampionPosition.Y) * speed);
            }
        }
        previousButtonState = mouse.RightButton;

    What that code does at the moment is, on a click, bring the sprite 1/3 of the distance to the mouse, but only once. How do I make it move consistently all the time? It seems I am not updating the sprite at all. EDIT: I added the Vector2 as Nick said, and with speed changed to 50 it should be OK. I tried it with if ButtonState.Pressed and it works while pressing the button. Thanks. However, I wanted it to start moving on a single mouse click and keep moving until it reaches the mousePosition. The edit of Nick's post says to create another Vector2, but I already have the one called mousePosition and I'm not sure how to use another one.

        // gets a Vector2 direction to move *by Nick Wilson
        Vector2 direction = mousePosition - currentChampionPosition;

        // make the direction vector a unit vector
        direction.Normalize();

        // multiply with speed (number of pixels)
        direction *= speed;

        // move champion to mouse position
        if (currentChampionPosition != mousePosition)
        {
            drawRectangle.X += (int)(direction.X);
            drawRectangle.Y += (int)(direction.Y);
        }
    }
    previousButtonState = mouse.RightButton;
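    Putting the question's code and the EDIT together, a consolidated sketch of click-to-move might look like the following; the field names and the float position field are assumptions. Keeping the position in a Vector2 avoids the int-truncation stalls that make small speeds (like 1/3) appear to do nothing:

        Vector2 position;   // champion position, kept as floats between frames
        Vector2 target;
        bool moving;

        // in Update():
        MouseState mouse = Mouse.GetState();
        if (mouse.RightButton == ButtonState.Released && previousButtonState == ButtonState.Pressed)
        {
            target = new Vector2(mouse.X, mouse.Y);  // capture the target once, on the click edge
            moving = true;
        }
        previousButtonState = mouse.RightButton;

        if (moving)
        {
            Vector2 toTarget = target - position;
            if (toTarget.Length() <= speed)
            {
                position = target;             // snap when within one step
                moving = false;
            }
            else
            {
                toTarget.Normalize();          // unit direction toward the target
                position += toTarget * speed;  // advance 'speed' pixels per frame
            }
            drawRectangle.X = (int)position.X; // project to the draw rectangle
            drawRectangle.Y = (int)position.Y;
        }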

    Read the article

  • New computer hangs on shutdown/reboot, how to troubleshoot?

    - by torbengb
    Summary: My machine hangs on shutdown/restart: all windows and the menu bar disappear but the desktop wallpaper remains, and it stays like that without disk activity forever (hours). It doesn't even show the shutdown screen (the one with the animated dots) where I could hit ESC and watch the shutdown text. How can I troubleshoot this? Details: I've just received a new nettop computer (Acer Aspire Revo 3700: CPU: Atom D525, GPU: Nvidia ION2). I've just made a clean install of Ubuntu 10.10 using the standard USB pendrive method. The machine boots OK and works OK, including WLAN and audio, but the graphics are not OK. Ubuntu offered to install and activate the current recommended Nvidia driver, but the machine hangs on shutdown/restart, which prevents the installation of the proper Nvidia driver. I have to cycle the power to reboot. I ran the Update Manager in the hope that the updates would fix the hang-up. At the end of the update installation it asked to reboot - and got stuck just like before. I see no obvious cause of the freeze and I don't know if it's caused by graphics problems or anything else. The only USB attachment is a mouse/keyboard; I don't have any external storage attached; and I don't have any programs running (the machine freezes even when doing a restart right after login). How can I determine what is causing the freeze? How can I fix this? I'm frankly rather disappointed, because I bought this new machine in the hope of getting the graphics to work, which failed miserably on my old machine, even though Ubuntu is supposed to be good with Nvidia. Being a fresh convert from Windows, I was hoping for a happier experience this time, so I'm very much looking forward to your suggestions! ... After posting this question, I see related questions in the right sidebar: this, this, and this. I don't know why these didn't show up while I composed my question. Those questions suggest some ACPI settings, but I am not experienced enough to find/change those settings. I'll try the sudo shutdown -h now command when I get home and see if that works, then update this question. I did check the system BIOS but didn't see anything out of the ordinary.

    Read the article

  • 2011 - ALMs for your development team and the people they work with.

    - by David V. Corbin
    Welcome to 2011; it is already shaping up to be a very exciting year. The title of the post is not about charitable giving, although that is also a great topic. It is about Application Lifecycle Management and the systems that support it, and 2011 will be a year where I expect many teams to invest heavily in this area. For those not familiar with ALM, it can be simplified down to "a comprehensive view of all of the ideas, requirements, activities and artifacts that impact an application over the course of its lifecycle, from concept until decommissioning". Obviously, this encompasses a large number of different areas, even for relatively small and medium sized projects. In recent years, many teams have adopted methodologies which address individual aspects of this, but the majority of this adoption has resulted in "islands of improvement" rather than the desired comprehensive outcome... until now! Last year Microsoft released Team Foundation Server 2010 along with Visual Studio 2010 Ultimate Edition, and with these two in combination the situation has drastically changed. At last there is a single environment that is capable of handling all aspects of ALM, and is also capable of dealing with migration and integration with existing systems to make the transition to a single solution much easier. The possibilities (and practicalities) are nothing short of amazing. Architecture thru Testing integration? YES. Being able to correlate specific requirement items (and their history) to actual code (and code history)? YES. Identification of which tests will be potentially impacted by a given code change? YES. Resilient automated testing of user interfaces? YES. Automatic deployment management? YES. Integration-level testing as part of (designated) builds? YES. I could easily double or triple the above list, but these items should be enough to get you thinking about the "pain points" your team and organization currently face and the fact that there IS a way to relieve the pain. Over the course of the year, I am hoping to bring together some of the "best of breed" information, along with hosting (and participating in) discussions with various experts in the field. There are already a number of groups (including many on LinkedIn) that have an ALM focus, and I encourage everyone to check them out. I will be posting a list of the ones I find most helpful in the not too distant future. As I said at the beginning, 2011 is shaping up to be a very interesting (and productive) year. Why wait to start investigating and adopting ALM? ps: For those interested in becoming an "alms giver" in the charitable sense, I highly recommend checking out GiveCamp. A group of developers, designers and others get together to create a solution for a charity in just under 48 hours. I will be attending the GiveCamp in New York City on Jan 14-16; more information is available at nycgivecamp.org/

    Read the article
