Search Results


  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
    OOW 2013 is over and we're heading home, so it is time to lean back and reflect on the impressions we took away from the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article, it was the biggest OOW ever. In parallel to the conference, the America's Cup took place in San Francisco, and the Oracle team won. Amazing job by the team, and again congratulations from our side.

    Back to the conference. The main topics for us were:

    - Oracle SOA / BPM Suite 12c
    - Adaptive Case Management (ACM)
    - Big Data
    - Fast Data
    - Cloud
    - Mobile

    Below we go into a little more detail on the key takeaways for each of these points.

    Oracle SOA / BPM Suite 12c

    During the five days at OOW, first details of the upcoming major releases of Oracle SOA Suite 12c and Oracle BPM Suite 12c were introduced. Some new key features are:

    - Managed File Transfer (MFT) for transferring big files from a source to a target location
    - Enhanced REST support through a new REST binding
    - A generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce
    - Enhanced analytics with BAM, which has been totally re-engineered (the BAM Console now also runs in Firefox!)
    - Templates (OSB pipelines, component templates, BPEL activity templates)
    - Enterprise Manager as a single monitoring console
    - OSB design-time integration into JDeveloper (really great!)
    - Enterprise modeling capabilities in BPM Composer

    These are only a few points of what is coming with 12c. We are really looking forward to the new release, because this seems to be really great stuff. The suite becomes more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications. With 12c, Oracle continues on its way - very impressive.

    Adaptive Case Management

    Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, in showing the upcoming Case Management UI (which will be available in 11g with the next BPM Suite MLR patch), the roadmap, and the differences from traditional business process modeling. They were very busy during the conference because a lot of partners and customers were interested.

    Big Data

    Big Data is one of the current hype themes. Because of huge amounts of data from different internal and external sources, handling this data becomes more and more challenging. Companies need to analyze their data to optimize their business, and the challenge is that the amount of data grows daily. To store and analyze the data efficiently, a scalable and flexible infrastructure is necessary, and here it is important that hardware and software are engineered to work together. Accordingly, several new features of Oracle Database 12c, like the new in-memory option, were presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and new processors, such as Oracle's new M6-32, were announced. The performance improvements when using this hardware in connection with the improved software solutions were really impressive. For more details, please take a look at our previous blog post.
    Regarding Big Data, Oracle also introduced its Big Data architecture, which consists of:

    - Oracle Big Data Appliance, preconfigured with Hadoop
    - Oracle Exadata, which stores huge amounts of data efficiently to achieve optimal query performance
    - Oracle Exalytics, a fast and scalable business analytics system

    Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively, the analysis can be implemented directly in Hadoop using "R". In addition, the Oracle BI tools can be used to analyze the data.

    Fast Data

    Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels at high frequency. The analysis of these data streams is also important for companies, because the incoming data has to be checked against business-relevant patterns in real time, so these patterns must be identified efficiently and with good performance. Here, in-memory grid solutions combining Oracle Coherence and Oracle Event Processing demonstrated very impressively how efficient real-time data processing can be. One example of a Fast Data solution shown during OOW was the analysis of Twitter streams for customer satisfaction: feeds with negative words like "bad" or "worse" were filtered, and after a defined threshold was reached within a certain timeframe, a business event was triggered.

    Cloud

    Another key trend in the IT market is of course Cloud Computing and what it means for companies and their businesses. Oracle announced its Cloud strategy and vision - companies can focus on their real business while all of their applications are available via the Cloud. This also includes Oracle Database and Oracle WebLogic, so that companies can build, deploy and run their own applications within the Cloud. Three different approaches were introduced:

    - Infrastructure as a Service (IaaS)
    - Platform as a Service (PaaS)
    - Software as a Service (SaaS)

    With the IaaS approach, only the infrastructure components are managed in the Cloud. Customers are very flexible regarding memory, storage or number of CPUs, because those parameters can be adjusted elastically. The PaaS approach means that, besides the infrastructure, the platforms necessary for running applications (such as databases or application servers) are also provided within the Cloud. Here customers can also decide whether installation and management of these components should be done by Oracle. The SaaS approach is the most complete one: all applications a company uses are managed in the Cloud. Oracle is planning to provide all of its applications, like ERP systems or HR applications, as Cloud services. In conclusion, this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner.

    As you can see, our OOW days were very, very interesting. We collected a lot of helpful information for our projects. The innovations presented at the conference are great, and being part of this was even greater! We are looking forward to next year's conference!
    Links:
    - http://www.oracle.com/openworld/index.html
    - http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


  • Communities - The importance of exchange and discussion

    Communication with your environment is an essential part of everyone's life. And it doesn't matter whether you are actually living in a rural area in the middle of nowhere, within the pulsating heart of a big city, or, in my case, on a wonderful island in the Indian Ocean. The ability to exchange your thoughts, your experience and your worries with another person helps you to get different points of view and new ideas on how to resolve an issue you might be confronted with.

    Benefits of community work

    What happens to be common sense in your daily life also applies to your work environment. Working in IT, or ICT as it is called in Mauritius, requires a lot of reading and learning. Not only during your lectures at the university, but also with your colleagues in a project assignment, and hopefully with 'unknown' pals in the universe of online communities. At least I can say that I learned quite a lot from other developers' code, their responses in various forums, their numerous blog articles, and while attending local user group meetings.

    When I started to work as a professional software developer (or engineer, some may say) years ago, I immediately checked for the existence of communities on the programming language, the database technology and other vital topics of software development in general. Luckily, it wasn't too difficult to find them. My employer had a subscription to the monthly magazines and newsletters of a national organisation which also ran the biggest forum in that area. Getting in touch with other developers and reading about their common problems and solutions was a huge benefit to my growth.

    Image courtesy of Michael Kappel (CC BY-NC 2.0)

    Active participation in and regular contribution to this community gave me some nice advantages, too. Within three years I was listed as a conference speaker at the annual developer's conference and provided several sessions on different topics during consecutive years. Back in 2004, I took over the responsibility and management of the monthly meetings of a regional user group, and organised it for more than two years. Furthermore, I was invited to the newly founded community program of Microsoft Germany (Community Leader/Insider Program - CLIP). My website on Active FoxPro Pages was nominated in the second batch of online communities. Due to my community work and providing advice to others, I had the honour of being awarded Microsoft Most Valuable Professional (MVP) - Visual Developer for Visual FoxPro in the years 2006 and 2007. It was a great experience to meet with other like-minded people, and I'm really grateful for that. Just in case, more details are listed in my Curriculum Vitae. But this all changed when I moved to Mauritius...

    Cyber island Mauritius?

    During the first months in Mauritius I was way too busy to think about community activities at all. First of all, there was the new company that had to be set up, the new staff had to be trained, and of course the communication workflows with the project managers back in Germany had to be sorted out, too. Second, I had to get a grip on my private matters, like getting the basics for my new household or exploring the neighbourhood. And last but not least, I needed a break from the hectic and intensive work prior to my departure. As soon as the sea literally calmed down, I started to have conversations with my colleagues about communities and user groups. Sadly, it turned out that there were none, or at least no one was aware of any at that time. Oh oh, what did I do?
    Anyway, having this kind of background and very positive experience with offline and online activities, I decided for myself that some day I was going to found a community in Mauritius for all kinds of IT/ICT-related fields. The main focus might be on software development, but not on a certain technology or methodology. It was clear to me that it should be an open infrastructure, and anyone is welcome to join, to experience, to share and to contribute if they would like to. That was the idea at the time...

    Ok, fast-forward to recent events. At the end of October 2012 I was invited to an event called Open Days, organised by Microsoft Indian Ocean Islands together with other local partners and resellers. There I got in touch with local Technical Evangelist Arnaud Meslier, and we had a good conversation on communities during the breaks. Eventually, I left a good impression on him, as we have been chatting on Facebook or Skype irregularly since.

    Well, seeing that my personal and professional surroundings had settled down and were running smoothly, having that great exchange and contact with Microsoft IOI (again), and being really eager to re-animate my intentions from 2007, I recently founded a new community: the Mauritius Software Craftsmanship Community - #MSCC.

    It took me a while to settle on the name, but it was obvious that the community should not be attached to one single technology, like a .NET user group, Oracle developers, or Joomla friends (these are fictitious names). There are several other reasons why I came up with 'Craftsmanship' as the core topic of this community. The expression 'engineering' didn't feel right for the fields covered. Software development in all its facets is a craft, and therefore demands a lot of practice but also guidance from more experienced developers. It also includes the process of designing, modelling and drafting ideas; it has to deal with various types of tests and test methodologies; and of course it should be focused on flexible and agile ways of acting, in order to meet and exceed a customer's request for a solution.

    Next, I was looking for an easy way to handle the organisation of events and meeting appointments. Having used all kinds of social media platforms like Google+, LinkedIn, Facebook, Xing, etc., I was never really confident about their event-handling features. More by chance I stumbled upon Meetup.com, and in combination with the other entities (G+ Communities, FB Pages or Groups) I am looking forward to advertising and managing all future activities there: Mauritius Software Craftsmanship Community.

    This is a community for those who care about and are proud of what they do. For those developers, regardless of how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough. I know there are not many 'craftsmen' yet, but it's a start... Let's see what it looks like by the end of the year. There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and stay informed on the latest updates. And last but not least, there will be a Trello workspace to collect and share ideas and provide downloads of slides, etc.

    Sharing is caring! As mentioned, the #MSCC is present in various social media networks in order to reach as many people as possible here in Mauritius.
    Following is an overview of the current networks:

    - Twitter - Latest updates and quickies
    - Google+ - Community channel
    - Facebook - Community Page
    - LinkedIn - Community Group
    - Trello - Collaboration workspace to share and develop ideas

    Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC among your colleagues, your friends and other interested 'geeks'.

    Your future looks bright

    Running and participating in a user group or any kind of community usually provides quite a number of advantages for anyone. On the one hand it is very enjoyable for me to organise appointments and get in touch with people who might be interested in presenting a little demo of their projects or the recent problems they had to tackle, and on the other hand there are lots of companies that have various support programs or sponsorships especially tailored for user groups. At the moment, I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and as said, companies provide all kinds of goodies, books free of charge, or sometimes even licenses for communities. Meeting other software developers or IT guys also opens up your view of the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and of course employees. Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here on the blog or join the conversations in the above-mentioned social networks. Let's get this community up and running, my fellow Mauritians!


  • How To: Using spatial data with Entity Framework and Connector/Net

    - by GABMARTINEZ
    One of the new features introduced in Entity Framework 5.0 is the incorporation of some new data types within an Entity Data Model: the spatial data types. These types allow us to perform operations on coordinate values in an easier way. There's no need to add stored routines or functions for every operation among these geometry types; now users have the alternative to put this logic in their application or keep it in the database. The new 6.7.4 version of the Connector/Net library also incorporates this feature, so our users can start exploring it and provide us feedback or comments about this new functionality. Through this tutorial on how to create a Code First Entity Model with a geometry column, we'll show an example of using geometry types and some common operations on them inside an application.

    Requirements:

    - Connector/Net 6.7.4
    - Entity Framework 5.0
    - .NET Framework 4.5
    - Basic understanding of Entity Framework and the C# language
    - An installed and running instance of MySQL Server 5.5.x or 5.6.10
    - Visual Studio 2012

    Step One: Create a new Console Application

    Inside Visual Studio select the File->New Project menu option and select the Console Application template. Also make sure the .NET 4.5 version is selected, so the new features of EF 5.0 will work with the application.

    Step Two: Add the Entity Framework package

    For adding the Entity Framework package there is more than one option: the Package Manager Console or the Manage NuGet Packages dialog. To open the Package Manager Console, go to Tools Menu -> Library Package Manager -> Package Manager Console. On the Package Manager Console type:

        Install-Package EntityFramework

    This will add a reference to the latest released non-alpha version of Entity Framework to the project.

    Step Three: Adding the entity class and DbContext

    We'll add a simple class that represents a table entity to save some places and their locations, using a DbGeometry column that will be mapped to a Geometry type in MySQL. After that, some operations can be performed using this data.

        public class MyPlace
        {
            [Key]
            public int Id { get; set; }
            public string name { get; set; }
            public DbGeometry location { get; set; }
        }

        public class JourneyDb : DbContext
        {
            public DbSet<MyPlace> MyPlaces { get; set; }
        }

    Also make sure to add the connection string to the App.config file as in the example:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <configSections>
            <!-- For more information on Entity Framework configuration, visit http://go.microsoft.com/fwlink/?LinkID=237468 -->
            <section name="entityFramework" type="System.Data.Entity.Internal.ConfigFile.EntityFrameworkSection, EntityFramework, Version=5.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" requirePermission="false" />
          </configSections>
          <startup>
            <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
          </startup>
          <connectionStrings>
            <add name="JourneyDb" connectionString="server=localhost;userid=root;pwd=;database=journeydb" providerName="MySql.Data.MySqlClient"/>
          </connectionStrings>
          <entityFramework>
          </entityFramework>
        </configuration>

    Note also that the <entityFramework> section is empty.

    Step Four: Adding some new records

    In the Program.cs file add the following code for the Main method, so the database gets created and some new data can be added to the new table. This code adds some records containing specific locations.
    After they are added, a distance function is used to show how far each location is from the Queens Village Station in New York.

        static void Main(string[] args)
        {
            using (JourneyDb cxt = new JourneyDb())
            {
                cxt.Database.Delete();
                cxt.Database.Create();

                cxt.MyPlaces.Add(new MyPlace()
                {
                    name = "JFK INTERNATIONAL AIRPORT OF NEW YORK",
                    location = DbGeometry.FromText("POINT(40.644047 -73.782291)"),
                });

                cxt.MyPlaces.Add(new MyPlace()
                {
                    name = "ALLEY POND PARK",
                    location = DbGeometry.FromText("POINT(40.745696 -73.742638)"),
                });

                cxt.MyPlaces.Add(new MyPlace()
                {
                    name = "CUNNINGHAM PARK",
                    location = DbGeometry.FromText("POINT(40.735031 -73.768387)"),
                });

                cxt.MyPlaces.Add(new MyPlace()
                {
                    name = "QUEENS VILLAGE STATION",
                    location = DbGeometry.FromText("POINT(40.717957 -73.736501)"),
                });

                cxt.SaveChanges();

                var points = (from p in cxt.MyPlaces
                              select new { p.name, p.location });

                foreach (var item in points)
                {
                    Console.WriteLine("Location " + item.name + " has a distance in Km from Queens Village Station " + DbGeometry.FromText("POINT(40.717957 -73.736501)").Distance(item.location) * 100);
                }
                Console.ReadKey();
            }
        }

    Output:

        Location JFK INTERNATIONAL AIRPORT OF NEW YORK has a distance from Queens Village Station 8.69448802402959 Km.
        Location ALLEY POND PARK has a distance from Queens Village Station 2.84097675104912 Km.
        Location CUNNINGHAM PARK has a distance from Queens Village Station 3.61695793727275 Km.
        Location QUEENS VILLAGE STATION has a distance from Queens Village Station 0 Km.

    Conclusion:

    Adding spatial data to a table is easier than before with Entity Framework 5.0. This new Entity Framework feature, which handles spatial data columns within the data layer, has a lot of integrated functions and methods to ease this type of task.

    Notes: This version of Connector/Net has just been released as GA, so it is pretty stable and can be used in a production environment. Please send us your comments or questions using this blog or at the Forums, where we keep answering any questions you have about Connector/Net and MySQL Server. A copy of this sample project can be downloaded here. This application does not include any libraries, so you will have to add them before running it. Happy MySQL/.NET coding.
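    As a quick follow-on (this sketch is not part of the original tutorial), the same DbGeometry.Distance method can also be used inside a LINQ to Entities query, for example to find the place nearest to a given point. It assumes the JourneyDb context from above, the same usings as Program.cs, and that the provider can translate Distance() to SQL:

        // Hypothetical follow-up query: find the place closest to a reference point.
        using (var cxt = new JourneyDb())
        {
            var origin = DbGeometry.FromText("POINT(40.717957 -73.736501)");

            // Order places by their distance from the origin and take the nearest one.
            var nearest = cxt.MyPlaces
                             .OrderBy(p => p.location.Distance(origin))
                             .FirstOrDefault();

            if (nearest != null)
                Console.WriteLine("Nearest place: " + nearest.name);
        }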


  • 3D Ball Physics Theory: collision response on ground and against walls?

    - by David
    I'm really struggling to get a strong grasp on how I should be handling collision response in a game engine I'm building around a 3D ball physics concept. Think Monkey Ball as an example of the type of gameplay. I am currently using sphere-to-sphere broad phase, then AABB-to-OBB testing (the final test I am using right now checks whether one of the 8 OBB points crosses the planes of the object it is testing against). This seems to work pretty well, and I am getting back the plane that the object is colliding against (with a point on the plane, the plane's normal, and the exact point of intersection).

    I've tried what feels like dozens of different high-level strategies for handling these collisions, without any real success. I think my biggest problem is understanding how to handle collisions against walls in the x-y axes (left/right, front/back), which I want to have elasticity, and the ground (z-axis), where I want an elastic reaction if the ball drops down, but then for it to eventually normalize and be kept "on the ground" (not go into the ground, but also not continue bouncing). Without kludging something together, I'm positive there is a good way to handle this; my theories just aren't getting me all the way there.

    For physics modeling and movement, I am trying to use a Euler-based setup with each object maintaining a position (and a destination position prior to collision detection), a velocity (which is added onto the position to determine the destination position), and an acceleration (which I use to store any player input being put on the ball, as well as gravity in the z coordinate). Starting from when I detect a collision, what is a good way to approach the response to get the expected behavior in all cases? Thanks in advance to anyone taking the time to assist... I am grateful for any pointers, and happy to post any additional info or code if it is useful.

    UPDATE

    Based on Steve H's and eBusiness' responses below, I have adapted my collision response to what makes a lot more sense now. It was close to right before, but I didn't have all the right pieces together at the right time! I have one problem left to solve, and that is what is causing the floor collision to hit every frame. Here's the collision response code I have now for the ball, then I'll describe the last bit I'm still struggling to understand.

        // if we are moving in the direction of the plane (against the normal)...
        if (m_velocity.dot(intersection.plane.normal) <= 0.0f)
        {
            float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration

            // Calculate the projection velocity
            PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal);
            m_velocity -= actingVelocity * dampeningForce;
        }

        // Clamp z-velocity to zero if we are within a certain threshold
        // -- NOTE: this was an experimental idea I had to solve the "jitter" bug I'll describe below
        float diff = 0.2f - abs(m_velocity.z);
        if (diff > 0.0f && diff <= 0.2f)
        {
            m_velocity.z = 0.0f;
        }

        // Take this object to its new destination position based on...
        // -- our pre-collision position + vector to the collision point + our new velocity after collision * time
        // -- remaining after the collision to finish the movement
        m_destPosition = m_position + intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt);

    The above snippet is run after a collision is detected on the ball (collider) with a collidee (the floor in this case).
    With a dampening force of 1.8f, the ball's reflected "upward" velocity will eventually be overcome by gravity, so the ball will essentially be stuck on the floor. THIS is the problem I have now... the collision code is running every frame (since the ball's z-velocity is constantly pushing it into a collision with the floor below it). The ball is not technically stuck; I can still move it around, but the movement is really goofy because the velocity and position keep getting adversely affected by the above snippet. I was experimenting with an idea to clamp the z-velocity to zero if it was "close to zero", but this didn't do what I expected... probably because the very next frame the ball gets a new gravity acceleration applied to its velocity regardless (which I think is good, right?). Collisions with walls are as they used to be and work very well. It's just this last bit of "stickiness" to deal with. The camera is also constantly jittering up and down by extremely small fractions when the ball is "at rest". I'll keep playing with it... I like puzzles like this, especially when I think I'm close. Any final ideas on what I could be doing wrong here?

    UPDATE 2

    Good news - I discovered I should be subtracting the intersection.diff from m_position (the position prior to collision). The intersection.diff is my calculation of the difference in the vector of position to destPosition from the intersection point to the position. In this case, adding it was causing my ball to always go "up" just a little bit, causing the jitter. By subtracting it, and moving the clamp on velocity.z (when close to zero) above the dot product test (and changing the test from <= 0 to < 0), I now have the following:

        // Clamp z-velocity to zero if we are within a certain threshold
        float diff = 0.2f - abs(m_velocity.z);
        if (diff > 0.0f && diff <= 0.2f)
        {
            m_velocity.z = 0.0f;
        }

        // if we are moving in the direction of the plane (against the normal)...
        float dotprod = m_velocity.dot(intersection.plane.normal);
        if (dotprod < 0.0f)
        {
            float dampeningForce = 1.8f; // eventually create this value based on mass and acceleration?

            // Calculate the projection velocity
            PVRTVec3 actingVelocity = m_velocity.project(intersection.plane.normal);
            m_velocity -= actingVelocity * dampeningForce;
        }

        // Take this object to its new destination position based on...
        // -- our pre-collision position + vector to the collision point + our new velocity after collision * time
        // -- remaining after the collision to finish the movement
        m_destPosition = m_position - intersection.diff + (m_velocity * intersection.tRemaining * GAMESTATE->dt);
        UpdateWorldMatrix(m_destWorldMatrix, m_destOBB, m_destPosition, false);

    This is MUCH better. No jitter, and the ball now "rests" at the floor, while still bouncing off the floor and walls. The ONLY thing left is that the ball is now virtually "stuck". It can move, but at a much slower rate, likely because the else branch of my dot product test is only letting the ball move at a rate multiplied by tRemaining... I think this is a better solution than I had previously, but still somehow not the right idea. BTW, I'm trying to journal my progress through this problem for anyone else in a similar situation - hopefully it will serve as some help, as many similar posts have for me over the years.
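    For reference (an editorial sketch, not taken from the question or its answers): subtracting 1.8 times the projected velocity is equivalent to the textbook reflection formula with a coefficient of restitution e, namely v' = v - (1 + e)(v.n)n for a unit normal n, so dampeningForce = 1.8f corresponds to e = 0.8. A minimal version, assuming a hypothetical Vec3 type with dot() and the usual operators:

        // Reflect velocity v off a contact plane with unit normal n.
        // e is the coefficient of restitution in [0, 1]; e = 0.8 matches dampeningForce = 1.8f.
        Vec3 Reflect(const Vec3& v, const Vec3& n, float e)
        {
            float vn = v.dot(n);               // normal speed; negative when moving into the plane
            if (vn >= 0.0f)
                return v;                      // separating or sliding along the plane: no response
            return v - n * ((1.0f + e) * vn);  // remove the incoming normal component plus e of it outward
        }

    Resting contact is then usually handled exactly the way the poster does: when the normal speed falls below a small threshold, treat e as 0 and zero out the normal component instead of bouncing.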


  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues in the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, and it took advantage of concurrent statistics gathering, incremental statistics, and the not-so-well-known "obj_filter_list" parameter of the DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outlined below with "obj_filter_list" still applies even when concurrent statistics gathering and/or incremental statistics gathering is disabled.

    The reason my colleague had asked the question in the first place was that he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partitions, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on the 5 partitioned tables as quickly as possible to ensure that it all finished before morning.

    Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition in each of the 5 tables, as well as another one to gather global statistics on each table; then run each script in a separate session and manually manage how many of these sessions could run concurrently. Since each table has over one thousand partitions, that would definitely be a daunting task and would most likely have kept my colleague up all night!

    In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and on multiple (sub)partitions within a table, concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine, but we really wanted to get it down to just one command. So how can we do that?

    You may be wondering why we didn't just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTIONS parameter set to 'GATHER STALE'. Unfortunately the statistics on the 5 partitioned tables were not stale, and enabling incremental statistics does not mark the existing statistics stale. Plus, how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us about the advantage of the "obj_filter_list" parameter of the DBMS_STATS.GATHER_SCHEMA_STATS procedure. The "obj_filter_list" parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB.
    Each entry in the collection has five fields: the schema name or object owner, the object type (i.e., 'TABLE' or 'INDEX'), the object name, the partition name, and the subpartition name. You don't have to specify all five fields for each entry; empty fields in an entry are treated as wildcards (similar to the '*' character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object qualifies for statistics gathering as long as it satisfies the filter conditions in one entry. You first must create the collection of objects, and then gather statistics for the specified collection.

    It's probably easier to explain this with an example. I'm using the SH sample schema, but needed a couple of additional partitioned tables to recreate my colleague's scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach.

    Step 0. Delete the statistics on the tables in the SH schema.

    Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level.

    Step 2. Enable incremental statistics for the 5 partitioned tables.

    Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command. Here, you will notice that we defined two variables of DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly, due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list; in our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete, we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command.

    Step 4. Confirm statistics were gathered on the 5 partitioned tables.

    Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in an entry is empty, i.e., null, there is no condition on this field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := 'SH'; you will get the same result, since you have specified gather_schema_stats and there is no need to further specify ownname in the obj_filter_lst. All of the names in an entry are normalized, i.e., uppercased, unless they are double-quoted. So in the above example it is OK to use Obj_filter_lst(1).objname := 'sales';. However, if you have a table called 'MyTab' instead of 'MYTAB', then you need to specify Obj_filter_lst(1).objname := '"MyTab"';

    As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here.
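    Since the original post showed Step 3 as a screenshot, here is a sketch of what that PL/SQL block would look like, reconstructed from the description above (the table names follow the SH example; treat this as illustrative rather than the exact script):

        DECLARE
          -- list of tables to gather, passed to obj_filter_list
          filter_lst DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
          -- receives the objects actually gathered, passed to objlist
          obj_lst    DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
        BEGIN
          filter_lst.extend(5);   -- five entries, one per partitioned table
          filter_lst(1).ownname := 'SH';  filter_lst(1).objname := 'SALES';
          filter_lst(2).ownname := 'SH';  filter_lst(2).objname := 'SALES2';
          filter_lst(3).ownname := 'SH';  filter_lst(3).objname := 'SALES3';
          filter_lst(4).ownname := 'SH';  filter_lst(4).objname := 'COSTS';
          filter_lst(5).ownname := 'SH';  filter_lst(5).objname := 'COSTS2';

          -- objlist must be supplied for obj_filter_list to work correctly (bug 14539274)
          DBMS_STATS.GATHER_SCHEMA_STATS(ownname         => 'SH',
                                         obj_filter_list => filter_lst,
                                         objlist         => obj_lst);
        END;
        /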
+Maria Colgan


  • Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox

    - by The Old Toxophilist
    Following on from my previous blog entry "Modifying the Base Template", I decided to put together a quick blog to show how to create a small VirtualBox guest that can be used to execute ModifyJeOS and hence edit your templates. One of the main advantages of this is that templates can be created away from the Exalogic environment. For the guest OS I chose OEL 6u3 and decided to create it as a basic server, because I did not require a graphical interface, but it's a simple change to create it with a GUI.

    Required Software

    - VirtualBox
    - Oracle Enterprise Linux (OEL 6u3)

    Creating the VM

    I'll assume that the reader is experienced with VirtualBox and installing OEL, and hence will keep this section brief.

    Create VirtualBox Guest

    Create a new VirtualBox guest and select Oracle Linux 64 bit. Follow through the creation process, selecting a dynamic disk and the default 12GB disk size. The actual image will be a lot smaller than this, but the OEL install will fail with insufficient disk space if you attempt a smaller size. Once the guest has been created, attach the previously downloaded OEL 6u3 iso to the CD drive and start the guest.

    Install OEL

    On starting, the guest will boot off the attached OEL 6u3 iso and take you through the standard installation process. Enter all the appropriate information, but when you reach the installation type, select Basic Server because we do not need the additional packages and only need access through the command-line interface. Complete the installation and reboot the guest. At this point we have a basic OEL server running.

    Installing Guest Additions

    Before we can easily access the guest, we will need to add the VirtualBox Guest Additions. These provide better keyboard and mouse integration and allow access to the shared folders on the host machine. Before we can do this, we will need to do the following:

    - Enable networking.
    - Install additional rpms.

    To enable networking (eth0), which appears to be disabled by default, we can execute:

        ifup eth0

    This will start the eth0 connection, but once the guest is rebooted the network will be down again. To resolve this, edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file and change the ONBOOT parameter to "yes". Now that we have enabled the network, we will need to install a number of additional rpms.
    First we will need to configure the yum repository as follows:

        [ol6_latest]
        name=Oracle Linux $releasever Latest ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

        [ol6_ga_base]
        name=Oracle Linux $releasever GA installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u1_base]
        name=Oracle Linux $releasever Update 1 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/1/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u2_base]
        name=Oracle Linux $releasever Update 2 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/2/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_u3_base]
        name=Oracle Linux $releasever Update 3 installation media copy ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/3/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

        [ol6_UEK_latest]
        name=Latest Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/latest/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=1

        [ol6_UEK_base]
        name=Unbreakable Enterprise Kernel for Oracle Linux $releasever ($basearch)
        baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/UEK/base/$basearch/
        gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
        gpgcheck=1
        enabled=0

    Once the repository has been configured, we will need to execute the following yum commands:

        yum update
        yum install gcc
        yum install kernel-uek-devel
        yum install kernel-devel
        yum install createrepo

    At this point we have all the additional packages required to install the VirtualBox Guest Additions. So select Devices -> Install Guest Additions on your running guest. This will simply place the VirtualBoxGuestAdditions.iso in the virtual CD, and we will need to execute the following before we can run the installer:

        mkdir /media/cdrom
        mount -t iso9660 -o ro /dev/cdrom /media/cdrom
        cd /media/cdrom/
        ls
        ./VBoxLinuxAdditions.run

    This will initiate the install and kernel rebuild. You will notice that during the installation a "Failed" is displayed, but this is simply because we have no graphical components. At this point the installation will also have added the vboxsf group to the system, and to access any shared folders our user will need to be a member of this group; so the next stage is to add the root user to this group as follows:

        usermod -G vboxsf root
        cat /etc/group
        cat /etc/passwd
        init 0

    Now simply shut down the guest and add the shared folder within your guest's settings.

    Install ModifyJeOS

    Once the shared folder has been added, restart the guest and change directory into the shared folder (/media/sf_<folder name>). For the next step I am assuming the ModifyJeOS rpms are located in the shared folder.
    We can simply execute:

        rpm -ivh ovm-modify-jeos-1.1.0-17.el5.noarch.rpm
        # Test with modifyjeos

    Using ModifyJeOS

    I have a modified MountSystemImg.sh script that should be copied into the /root/bin directory (you may need to create this); from there it can be executed from any location:

    MountSystemImg.sh

        #!/bin/sh
        # The script assumes it's being run from the directory containing the System.img
        # Export for later i.e. during unmount
        export LOOP=`losetup -f`
        export SYSTEMIMG=/mnt/elsystem
        export TEMPLATEDIR=`pwd`
        # Make Temp Mount Directory
        mkdir -p $SYSTEMIMG
        # Create Loop for the System Image
        losetup $LOOP System.img
        kpartx -a $LOOP
        mount /dev/mapper/`basename $LOOP`p2 $SYSTEMIMG
        # Change Dir into mounted Image
        cd $SYSTEMIMG
        echo "######################################################################"
        echo "###                                                                ###"
        echo "### Starting Bash shell for editing. When completed log out to    ###"
        echo "### Unmount the System.img file.                                  ###"
        echo "###                                                                ###"
        echo "######################################################################"
        echo
        bash
        cd ~
        cd $TEMPLATEDIR
        umount $SYSTEMIMG
        kpartx -d $LOOP
        losetup -d $LOOP
        rm -rf $SYSTEMIMG

    This script will simply create a mount directory, mount the System.img, and then start a new shell in the mounted directory. On exiting the shell, it will unmount the System.img. It only requires that you execute the script in the directory containing the System.img, which can be created under the mounted shared directory. In the example below, I have extracted the Base template within the shared folder and then renamed it OEL_40GB_ROOT before changing into that directory and executing the script.
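    The example itself was shown as a screenshot in the original post; the command sequence it describes boils down to something like this (the shared folder name is illustrative):

        # after extracting the Base template in the shared folder and renaming it:
        cd /media/sf_Templates/OEL_40GB_ROOT
        /root/bin/MountSystemImg.sh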


  • Corsair Hackers Reboot

    It wasn't easy for me to attend, but it was absolutely worth going. The Linux User Group of Mauritius (LUGM) organised another get-together for any open source enthusiast here on the island. It is strangely named "Corsair Hackers Reboot", but it stands for a positive cause:

    "Corsair Hackers Reboot Event
    A collaborative activity involving LUGM, UoM Computer Club, Fortune Way Shopping Mall and several geeks from around the island, striving to put FOSS into homes & offices. The public is invited to discover and explore Free Software & Open Source."

    And it was a good opportunity for me and the kids to visit the east coast of Mauritius, too.

    Perfect timing

    It couldn't have been better... Why? Well, for two important reasons (in terms of IT):

    - End of support for Microsoft Windows XP - 08.04.2014
    - Release of Ubuntu 14.04 Long Term Support - 17.04.2014

    Quite funnily, those two IT dates weren't the initial reasons, and only during the weeks of preparation did we put them together. And therefore it was even more positive to promote the use of Linux and open source software in general to a broader audience.

    Getting there ...

    Thanks to the new motorway M3 and all the additional road work which has been completed recently, it was very simple to get across the island in a quick and relaxed manner. Compared to my trips in the early days of living in Mauritius (and riding on a scooter), it was very smooth, and within less than an hour we hit Centrale de Flacq. Well, being in the city doesn't necessarily mean that one has arrived at the destination. But thanks to modern technology I had a quick look at Google Maps, and we finally managed to find parking behind the huge bus terminal in Flacq. From there it was just a short walk to Fortune Way. The children were trying to count the number of buses... lots and lots of buses - really impressive, actually.

    What was presented?

    There were different areas set up. Right at the entrance, one's attention was directly drawn towards the elevated hackers' stage. Similar to rock stars performing their gig, there was a bunch of computers, laptops and networking equipment providing the right working conditions for the coding/programming challenge(s) on the one hand, and for the pen-testing or system hacking competition on the other.

    Personally, I was very impressed that Nitin took care of the pen-testing competition. He only started with Linux in general, and Kali Linux specifically, about a year ago. Seeing his personal development from absolute newbie to a decent Linux system administrator within such a short period of time is really impressive. His passion for open source software has earned him a living.

    Next, clockwise, was the Kids' Corner with face-painting as the main attraction. Additionally, there were numerous paper printouts to colour, plus a decent workstation with the educational suite GCompris. Of course, my little ones were into that. They have known GCompris for a while, as they are allowed to use it on an IGEL thin client terminal here at home. To simplify my life, I set up GCompris as a full-screen guest session on the server, so they can pass the login screen without any further obstacles. And because it's a thin client hooked up to an XDMCP remote session, I don't have to worry about the hardware on their desk, either.

    The next section was the main attraction of the event: BYOD - Bring Your Own Device. Well, compared to the usual context of BYOD, the corsairs had a completely different intention.
    Here, you could bring your own laptop, and a team of knowledgeable experts - read: geeks and so on - offered to fully convert your system to any Linux distribution of your choice. And even though I came later, I was told that the USB pen drives had been in permanent use: from being prepared via the dd command, to launching LiveCD sessions, to finally installing a fresh Linux system on bare metal. Most interestingly, I did a similar job a couple of months ago, upgrading an existing Windows XP system to Xubuntu 13.10. So far, the owner is very happy and enjoys her system almost every evening to shop online, check mails, and read the latest news from the Anime world.

    Back to the Hackers event: Ish told me that they managed approximately 20 conversions during the day. Furthermore, Ajay and others gladly assisted some visitors with tricky issues, and by the end of the day you could call it a success. While I was around, an elderly visitor got a full-fledged system conversion to a Linux system running completely in French.

    A little more towards the centre, it was Yasir's turn to demonstrate his Arduino hardware, which he had hooked up to an experimental electrical circuit board connected to an LCD matrix display. That's the real spirit of hacking, and he showed some minor adjustments on the fly while demoing the system. There was also a thermal sensor around. Personally, I think that platforms like the Arduino, as well as the Raspberry Pi, have great potential at a very affordable price for bringing a better understanding of electronics as well as computer programming to a broader audience. It would be great to see more of these experiments during future activities.

    And last but not least, there were a small number of vendors. Amongst them was Emtel - once again as sponsor of the general internet connectivity - and another hardware supplier from the Riche Terre shopping mall. They had a good collection of Android-related gimmicks, like an autonomous web cam that can convert any TV with an HDMI connector into an online video chat system, given WiFi. It's actually kind of awesome to have a Skype or Google Hangouts video session on the big screen rather than on the laptop.

    Some pictures of the event

    - LUGM: Great conversations on Linux, open source and free software during the Corsair Hackers Reboot
    - LUGM: Educational workstation running the GCompris suite attracted the youngest attendees of the day. Of course, face painting had to be done prior to hacking...
    - LUGM: Nadim demoing some Linux specifics to interested visitors. Everyone was pretty busy during the whole day
    - LUGM: The hacking competition, here pen-testing a wireless connection and access point between multiple machines
    - LUGM: Well-prepared workstations to be able to 'upgrade' visitors' machines to any Linux operating system

    Final thoughts

    Gratefully, during the preparations for the event I was invited to leave some comments or suggestions, and the team of the LUGM did a great job. The outdoor banner was an eye-catcher, the various flyers and posters for the event were clearly written, and as far as I understood from the quick chats I had with Ish, Nadim, Nitin, Ajay, and of course others, all were very happy about the event execution. Great job, LUGM! And I'm already looking forward to the next Corsair Hackers Reboot event... Crossing fingers: very soon, and hopefully this year again :)

    Update: In the media

    The event was announced in the local media, too.
L'Express: Salon informatique: Hacking Challenge à Flacq


  • UV Atlas Generation and Seam Removal

    - by P. Avery
    I'm generating light maps for scene mesh objects using DirectX's UV Atlas tool (D3DXUVAtlasCreate()). I've succeeded in generating an atlas; however, when I try to render the mesh object using the atlas, the seams are visible on the mesh. Below are images of a lightmap generated for a cube.

    Here is the code I use to generate a uv atlas for a cube:

        struct sVertexPosNormTex
        {
            D3DXVECTOR3 vPos, vNorm;
            D3DXVECTOR2 vUV;

            sVertexPosNormTex() {}
            sVertexPosNormTex( D3DXVECTOR3 v, D3DXVECTOR3 n, D3DXVECTOR2 uv )
            {
                vPos = v;
                vNorm = n;
                vUV = uv;
            }
            ~sVertexPosNormTex()
            {
            }
        };

        // create a light map texture to fill programatically
        hr = D3DXCreateTexture( pd3dDevice, 128, 128, 1, 0, D3DFMT_A8R8G8B8,
                                D3DPOOL_MANAGED, &pLightmap );
        if( FAILED( hr ) )
        {
            DebugStringDX( "Main", "Failed to D3DXCreateTexture( lightmap )", __LINE__, hr );
            return hr;
        }

        // get the zero level surface from the texture
        IDirect3DSurface9 *pS = NULL;
        pLightmap->GetSurfaceLevel( 0, &pS );

        // clear surface
        pd3dDevice->ColorFill( pS, NULL, D3DCOLOR_XRGB( 0, 0, 0 ) );

        // load a sample mesh
        DWORD dwcMaterials = 0;
        LPD3DXBUFFER pMaterialBuffer = NULL;
        V_RETURN( D3DXLoadMeshFromX( L"cube3.x", D3DXMESH_MANAGED, pd3dDevice,
                                     &pAdjacency, &pMaterialBuffer, NULL,
                                     &dwcMaterials, &g_pMesh ) );

        // generate adjacency
        DWORD *pdwAdjacency = new DWORD[ 3 * g_pMesh->GetNumFaces() ];
        g_pMesh->GenerateAdjacency( 1e-6f, pdwAdjacency );

        // create light map coordinates
        LPD3DXMESH pMesh = NULL;
        LPD3DXBUFFER pFacePartitioning = NULL, pVertexRemapArray = NULL;
        FLOAT resultStretch = 0;
        UINT numCharts = 0;
        hr = D3DXUVAtlasCreate( g_pMesh, 0, 0, 128, 128, 3.5f, 0, pdwAdjacency,
                                NULL, NULL, NULL, NULL, NULL, 0,
                                &pMesh, &pFacePartitioning, &pVertexRemapArray,
                                &resultStretch, &numCharts );
        if( SUCCEEDED( hr ) )
        {
            // release and set mesh
            SAFE_RELEASE( g_pMesh );
            g_pMesh = pMesh;

            // write mesh to file
            hr = D3DXSaveMeshToX( L"cube4.x", g_pMesh, 0,
                                  ( const D3DXMATERIAL* )pMaterialBuffer->GetBufferPointer(),
                                  NULL, dwcMaterials, D3DXF_FILEFORMAT_TEXT );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to D3DXSaveMeshToX() at OnD3D9CreateDevice()", __LINE__, hr );
            }

            // fill the light map
            hr = BuildLightmap( pS, g_pMesh );
            if( FAILED( hr ) )
            {
                DebugStringDX( "Main", "Failed to BuildLightmap()", __LINE__, hr );
            }
        }
        else
        {
            DebugStringDX( "Main", "Failed to D3DXUVAtlasCreate() at OnD3D9CreateDevice()", __LINE__, hr );
        }

        SAFE_RELEASE( pS );
        SAFE_DELETE_ARRAY( pdwAdjacency );
        SAFE_RELEASE( pFacePartitioning );
        SAFE_RELEASE( pVertexRemapArray );
        SAFE_RELEASE( pMaterialBuffer );

    Here is the code to fill the lightmap texture:

        HRESULT BuildLightmap( IDirect3DSurface9 *pS, LPD3DXMESH pMesh )
        {
            HRESULT hr = S_OK;

            // validate lightmap texture surface and mesh
            if( !pS || !pMesh )
                return E_POINTER;

            // lock the mesh vertex buffer
            sVertexPosNormTex *pV = NULL;
            pMesh->LockVertexBuffer( D3DLOCK_READONLY, ( void** )&pV );

            // lock the mesh index buffer
            WORD *pI = NULL;
            pMesh->LockIndexBuffer( D3DLOCK_READONLY, ( void** )&pI );

            // get the lightmap texture surface description
            D3DSURFACE_DESC desc;
            pS->GetDesc( &desc );

            // lock the surface rect to fill with color data
            D3DLOCKED_RECT rct;
            hr = pS->LockRect( &rct, NULL, 0 );
            if( FAILED( hr ) )
            {
                DebugStringDX( "main.cpp:", "Failed to IDirect3DTexture9::LockRect()", __LINE__, hr );
                return hr;
            }

            // iterate the pixels of the lightmap texture
            // check each pixel to see if it lies between the uv coordinates of a cube face
            BYTE *pBuffer = ( BYTE* )rct.pBits;
            for( UINT y = 0; y < desc.Height; ++y )
            {
                BYTE* pBufferRow = ( BYTE* )pBuffer;
                for( UINT x = 0; x < desc.Width * 4; x += 4 )
                {
                    // determine the pixel's uv coordinate
                    D3DXVECTOR2 p( ( ( float )x / 4.0f ) / ( float )desc.Width + 0.5f / 128.0f,
                                   y / ( float )desc.Height + 0.5f / 128.0f );

                    // for each face of the mesh
                    // check to see if the pixel lies within the face's uv coordinates
                    for( UINT i = 0; i < 3 * pMesh->GetNumFaces(); i += 3 )
                    {
                        sVertexPosNormTex v[ 3 ];
                        v[ 0 ] = pV[ pI[ i + 0 ] ];
                        v[ 1 ] = pV[ pI[ i + 1 ] ];
                        v[ 2 ] = pV[ pI[ i + 2 ] ];

                        if( TexcoordIsWithinBounds( v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p ) )
                        {
                            // the pixel lies b/t the uv coordinates of a cube face
                            // light contribution functions aren't needed yet
                            //D3DXVECTOR3 vPos = TexcoordToPos( v[ 0 ].vPos, v[ 1 ].vPos, v[ 2 ].vPos,
                            //                                  v[ 0 ].vUV, v[ 1 ].vUV, v[ 2 ].vUV, p );
                            //D3DXVECTOR3 vNormal = v[ 0 ].vNorm;

                            // set the color of this pixel red( for demo )
                            BYTE ba[] = { 0, 0, 255, 255, };
                            //ComputeContribution( vPos, vNormal, g_sLight, ba );

                            // copy the byte array into the light map texture
                            memcpy( ( void* )&pBufferRow[ x ], ( void* )ba, 4 * sizeof( BYTE ) );
                        }
                    }
                }

                // go to next line of the texture
                pBuffer += rct.Pitch;
            }

            // unlock the surface rect
            pS->UnlockRect();

            // unlock mesh vertex and index buffers
            pMesh->UnlockIndexBuffer();
            pMesh->UnlockVertexBuffer();

            // write the surface to file
            hr = D3DXSaveSurfaceToFile( L"LightMap.jpg", D3DXIFF_JPG, pS, NULL, NULL );
            if( FAILED( hr ) )
                DebugStringDX( "Main.cpp", "Failed to D3DXSaveSurfaceToFile()", __LINE__, hr );

            return hr;
        }

        bool TexcoordIsWithinBounds( const D3DXVECTOR2 &t0, const D3DXVECTOR2 &t1,
                                     const D3DXVECTOR2 &t2, const D3DXVECTOR2 &p )
        {
            // compute vectors
            D3DXVECTOR2 v0 = t1 - t0,
                        v1 = t2 - t0,
                        v2 = p - t0;

            float f00 = D3DXVec2Dot( &v0, &v0 );
            float f01 = D3DXVec2Dot( &v0, &v1 );
            float f02 = D3DXVec2Dot( &v0, &v2 );
            float f11 = D3DXVec2Dot( &v1, &v1 );
            float f12 = D3DXVec2Dot( &v1, &v2 );

            // Compute barycentric coordinates
            float invDenom = 1 / ( f00 * f11 - f01 * f01 );
            float fU = ( f11 * f02 - f01 * f12 ) * invDenom;
            float fV = ( f00 * f12 - f01 * f02 ) * invDenom;

            // Check if point is in triangle
            if( ( fU >= 0 ) && ( fV >= 0 ) && ( fU + fV < 1 ) )
                return true;

            return false;
        }

    [Screenshot and lightmap images omitted.]

    I believe the problem comes from the difference between the lightmap uv coordinates and the pixel-center coordinates. For example, here are the lightmap uv coordinates (generated by D3DXUVAtlasCreate()) for a specific face (tri) within the mesh; keep in mind that I'm using the mesh uv coordinates to write the pixels for the texture:

        v[ 0 ].uv = D3DXVECTOR2( 0.003581, 0.295631 );
        v[ 1 ].uv = D3DXVECTOR2( 0.003581, 0.003581 );
        v[ 2 ].uv = D3DXVECTOR2( 0.295631, 0.003581 );

    The lightmap texture size is 128 x 128 pixels. The upper-left pixel center coordinates are:

        float halfPixel = 0.5 / 128 = 0.00390625;
        D3DXVECTOR2 pixelCenter = D3DXVECTOR2( halfPixel, halfPixel );

    Will the mapping and sampling of the lightmap texture require that an offset be taken into account, or that the uv coordinates be snapped to the pixel centers? Any ideas on the best way to approach this situation would be appreciated... What are the common practices?
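    One common practice (offered here as an editorial sketch, not taken from the question or an answer) is to pad each chart with a gutter - D3DXUVAtlasCreate() already reserves one via its gutter parameter (the 3.5f above) - and then dilate the rasterised chart borders outward into those empty texels, so that bilinear filtering near a seam reads valid colours instead of the black background. A minimal single dilation pass over a raw BGRA buffer might look like this; the `filled` occupancy mask is a hypothetical helper that BuildLightmap() would set for every texel it colours:

        #include <cstring>
        #include <vector>

        // One dilation pass: every empty texel copies the colour of a filled
        // 4-neighbour. Run the pass a few times to grow a gutter several texels wide.
        void DilateLightmap( unsigned char *pixels, std::vector<bool> &filled, int W, int H )
        {
            std::vector<bool> next = filled;
            const int dx[] = { 1, -1, 0, 0 }, dy[] = { 0, 0, 1, -1 };
            for( int y = 0; y < H; ++y )
            {
                for( int x = 0; x < W; ++x )
                {
                    if( filled[ y * W + x ] )
                        continue;
                    for( int k = 0; k < 4; ++k )
                    {
                        int nx = x + dx[ k ], ny = y + dy[ k ];
                        if( nx < 0 || ny < 0 || nx >= W || ny >= H || !filled[ ny * W + nx ] )
                            continue;
                        // copy the neighbour's BGRA colour into the empty texel
                        memcpy( &pixels[ ( y * W + x ) * 4 ], &pixels[ ( ny * W + nx ) * 4 ], 4 );
                        next[ y * W + x ] = true;
                        break;
                    }
                }
            }
            filled.swap( next );
        }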


  • How to implement smooth flocking

    - by Craig
I'm working on a simple survival game: avoid the big guy and chase the small guys to stay alive for as long as possible. I have taken the chase-and-evade example from MSDN and drawn 20 mice on the screen. I want the small guys to flock when they aren't evading. They are doing this, but it isn't as smooth as I would like it to be. How do I make the movement smoother? It's very jittery. Below is what I have going at the moment; the flocking code is within the if statement, for when the mouse isn't set to evading. Any help would be greatly appreciated! :)

namespace ChaseAndEvade
{
    class MouseSprite
    {
        public enum MouseAiState
        {
            // evading the cat
            Evading,
            // the mouse can't see the "cat", and it's wandering around.
            Wander
        }

        // how fast can the mouse move?
        public float MaxMouseSpeed = 4.5f;

        // and how fast can it turn?
        public float MouseTurnSpeed = 0.20f;

        // MouseEvadeDistance controls the distance at which the mouse will flee from
        // cat. If the mouse is further than "MouseEvadeDistance" pixels away, he will
        // consider himself safe.
        public float MouseEvadeDistance = 100.0f;

        // this constant is similar to TankHysteresis. The value is larger than the
        // tank's hysteresis value because the mouse is faster than the tank: with a
        // higher velocity, small fluctuations are much more visible.
        public float MouseHysteresis = 60.0f;

        public Texture2D mouseTexture;
        public Vector2 mouseTextureCenter;
        public Vector2 mousePosition;
        public MouseAiState mouseState = MouseAiState.Wander;
        public float mouseOrientation;
        public Vector2 mouseWanderDirection;

        int separationImpact = 4;
        int cohesionImpact = 6;
        int alignmentImpact = 2;
        int sensorDistance = 50;

        public void UpdateMouse(Vector2 position, MouseSprite[] mice, int numberMice, int index)
        {
            Vector2 catPosition = position;
            int enemies = numberMice;

            // first, calculate how far away the mouse is from the cat, and use that
            // information to decide how to behave. If they are too close, the mouse
            // will switch to "active" mode - fleeing. if they are far apart, the mouse
            // will switch to "idle" mode, where it roams around the screen.
            // we use a hysteresis constant in the decision making process, as described
            // in the accompanying doc file.
            float distanceFromCat = Vector2.Distance(mousePosition, catPosition);

            // the cat is a safe distance away, so the mouse should idle:
            if (distanceFromCat > MouseEvadeDistance + MouseHysteresis)
            {
                mouseState = MouseAiState.Wander;
            }
            // the cat is too close; the mouse should run:
            else if (distanceFromCat < MouseEvadeDistance - MouseHysteresis)
            {
                mouseState = MouseAiState.Evading;
            }
            // if neither of those if blocks hit, we are in the "hysteresis" range,
            // and the mouse will continue doing whatever it is doing now.

            // the mouse will move at a different speed depending on what state it
            // is in. when idle it won't move at full speed, but when actively evading
            // it will move as fast as it can. this variable is used to track which
            // speed the mouse should be moving.
            float currentMouseSpeed;

            // the second step of the Update is to change the mouse's orientation based
            // on its current state.
            if (mouseState == MouseAiState.Evading)
            {
                // If the mouse is "active," it is trying to evade the cat. The evasion
                // behavior is accomplished by using the TurnToFace function to turn
                // towards a point on a straight line facing away from the cat. In other
                // words, if the cat is point A, and the mouse is point B, the "seek
                // point" is C.
                //     C
                //   B
                // A
                Vector2 seekPosition = 2 * mousePosition - catPosition;

                // Use the TurnToFace function, which we introduced in the AI Series 1:
                // Aiming sample, to turn the mouse towards the seekPosition. Now when
                // the mouse moves forward, it'll be trying to move in a straight line
                // away from the cat.
                mouseOrientation = ChaseAndEvadeGame.TurnToFace(mousePosition,
                    seekPosition, mouseOrientation, MouseTurnSpeed);

                // set currentMouseSpeed to MaxMouseSpeed - the mouse should run as fast
                // as it can.
                currentMouseSpeed = MaxMouseSpeed;
            }
            else
            {
                // if the mouse isn't trying to evade the cat, it should just meander
                // around the screen. we'll use the Wander function, which the mouse and
                // tank share, to accomplish this. mouseWanderDirection and
                // mouseOrientation are passed by ref so that the wander function can
                // modify them. for more information on ref parameters, see
                // http://msdn2.microsoft.com/en-us/library/14akc2c7(VS.80).aspx
                ChaseAndEvadeGame.Wander(mousePosition, ref mouseWanderDirection,
                    ref mouseOrientation, MouseTurnSpeed);

                // if the mouse is wandering, it should only move at 25% of its maximum
                // speed.
                currentMouseSpeed = .25f * MaxMouseSpeed;

                Vector2 separate = Vector2.Zero;
                Vector2 moveCloser = Vector2.Zero;
                Vector2 moveAligned = Vector2.Zero;

                // What the AI does when it sees other AIs
                for (int j = 0; j < enemies; j++)
                {
                    if (index != j)
                    {
                        // Calculate a vector towards another AI
                        Vector2 separation = mice[index].mousePosition - mice[j].mousePosition;

                        // Only react if other AI is within a certain distance
                        if ((separation.Length() < this.sensorDistance) & (separation.Length() > 0))
                        {
                            moveAligned += mice[j].mouseWanderDirection;
                            float distance = Math.Abs(separation.Length());
                            if (distance == 0) distance = 1;
                            moveCloser += mice[j].mousePosition;
                            separation.Normalize();
                            separate += separation / distance;
                        }
                    }
                }

                if (moveAligned.LengthSquared() != 0)
                {
                    moveAligned.Normalize();
                }

                if (moveCloser.LengthSquared() != 0)
                {
                    moveCloser.Normalize();
                }

                moveCloser /= enemies;

                mice[index].mousePosition += (separate * separationImpact) +
                    (moveCloser * cohesionImpact) + (moveAligned * alignmentImpact);
            }

            // The final step is to move the mouse forward based on its current
            // orientation. First, we construct a "heading" vector from the orientation
            // angle. To do this, we'll use Cosine and Sine to tell us the x and y
            // components of the heading vector. See the accompanying doc for more
            // information.
            Vector2 heading = new Vector2(
                (float)Math.Cos(mouseOrientation), (float)Math.Sin(mouseOrientation));

            // by multiplying the heading and speed, we can get a velocity vector. the
            // velocity vector is then added to the mouse's current position, moving him
            // forward.
            mousePosition += heading * currentMouseSpeed;
        }
    }
}
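A likely cause of the jitter is that the combined flocking forces are added straight to mousePosition every frame, bypassing the turn-speed and speed limits the rest of UpdateMouse enforces, so each mouse snaps a few pixels sideways per frame instead of steering. A hedged sketch of one fix, reusing the names from the sample above (treat it as an idea, not drop-in code):

// Sketch: fold the flocking forces into the mouse's orientation instead of
// adding them to its position. The single "heading * currentMouseSpeed" step
// at the bottom of UpdateMouse then does all the actual movement, so the
// speed can never spike from one frame to the next.
Vector2 flockingForce = (separate * separationImpact) +
    (moveCloser * cohesionImpact) + (moveAligned * alignmentImpact);

if (flockingForce.LengthSquared() > 0)
{
    // steer towards the direction the flocking forces point in, rate-limited
    // by the same MouseTurnSpeed already used for evading
    Vector2 seekPosition = mousePosition + flockingForce;
    mouseOrientation = ChaseAndEvadeGame.TurnToFace(mousePosition,
        seekPosition, mouseOrientation, MouseTurnSpeed);
}

Part of the jitter may also come from moveCloser itself: it accumulates absolute positions rather than offsets, so after Normalize() it points from the screen origin towards the neighbours instead of from the mouse towards their centre. Accumulating (mice[j].mousePosition - mousePosition) instead would give the cohesion term a meaningful direction.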


  • Augmenting your Social Efforts via Data as a Service (DaaS)

    - by Mike Stiles
The following is the 3rd in a series of posts on the value of leveraging social data across your enterprise by Oracle VP Product Development Don Springer and Oracle Cloud Data and Insight Service Sr. Director Product Management Niraj Deo. In this post, we will discuss the approach and value of integrating additional “public” data via a cloud-based Data-as-a-Service platform (or DaaS) to augment your Socially Enabled Big Data Analytics and CX Management. Let’s assume you have a functional Social-CRM platform in place. You are now successfully and continuously listening and learning from your customers and key constituents in Social Media, you are identifying relevant posts and following up with direct engagement where warranted (both 1:1, 1:community, 1:all), and you are starting to integrate signals for communication into your appropriate Customer Experience (CX) Management systems as well as insights for analysis in your business intelligence application. What is the next step?

Augmenting Social Data with other Public Data for More Advanced Analytics

When we say advanced analytics, we are talking about understanding causality and correlation from a wide variety, volume and velocity of data to Key Performance Indicators (KPI), to achieve and optimize business value. And in some cases, to predict future performance to make appropriate course corrections and change the outcome to your advantage while you can. The data to acquire, process and analyze this is very nuanced:

- It can vary across structured, semi-structured, and unstructured data
- It can span across content, profile, and communities of profiles data
- It is increasingly public, curated and user generated

The key is not just getting the data, but making it value-added data and using it to help discover the insights to connect to and improve your KPIs. As we spend time working with our larger customers on advanced analytics, we have seen a need arise for more business applications to have the ability to ingest and use “quality” curated, social, transactional reference data and corresponding insights. The challenge for the enterprise has been getting this data inline into an easily accessible system and providing the contextual integration of the underlying data, enriched with insights, to be exported into the enterprise’s business applications. The following diagram shows the requirements for this next-generation data and insights service (DaaS). Some quick points on these requirements:

Public Data, which in this context is about Common Business Entities, such as:
- Customers, Suppliers, Partners, Competitors (all are organizations)
- Contacts, Consumers, Employees (all are people)
- Products, Brands

This data can be broadly categorized, incrementally, as:
- Base Utility data (address, industry classification)
- Public Master Reference data (trade style, hierarchy)
- Social/Web data (News, Feeds, Graph)
- Transactional Data generated by enterprise processes, workflows, etc.

This data has traits of high volume, variety, velocity, etc., and the technology needed to efficiently integrate it for your needs includes:
- Change management of Public Reference Data across all categories
- Applied Big Data to extract statistics as well as real-time insights
- Knowledge Diagnostics and Data Mining

As you consider how to deploy this solution, many of our customers will be using an online “cloud” service that provides quality data and insights uniformly to all their necessary applications. In addition, they are requesting a service that:

- Is agile and easy to use: applications integrated with the service can obtain data on-demand, quickly and simply
- Is cost-effective: pre-integrated into applications so customers don’t have to do it themselves
- Has high data quality: single point access to reference data for data quality and linkages to transactional, curated and social data
- Supports data governance: becomes more manageable and cost-effective since control of data privacy and compliance can be enforced in a centralized place

Data-as-a-Service (DaaS)

Just as the cloud has transformed and now offers a better path for how an enterprise manages its IT across infrastructure, platform, and software (IaaS, PaaS, and SaaS), the next step is data (DaaS). Over the last 3 years, we have seen the market begin to offer a cloud-based data service and gain initial traction. On one side of the DaaS continuum, we see an “appliance” type of service that provides a single, reliable source of accurate business data plus social information about accounts, leads, contacts, etc. On the other side of the continuum we see more of an online market “exchange” approach, where ISVs and Data Publishers can publish and sell premium datasets within the exchange, with the exchange providing a rich set of web interfaces to improve the ease of data integration. Why the difference? It depends on the provider’s philosophy on how fast certain data types will become commoditized. How do you decide the best approach? Our perspective, as shown in the diagram below, is that the enterprise should develop an elastic schema to support multi-domain applicability. This allows the enterprise to take the most flexible approach to harness the speed and breadth of public data to achieve value. The key tenet of the proposed approach is that an enterprise carefully federates common utility, master reference data end points, mobility considerations and content processing, so that they are pervasively available. One way you may already be familiar with this approach is in how you do Address Verification treatments for accounts, contacts, etc. If you design and revise this service in such a way that it is also easily available to social analytic needs, you could extend this to launch geo-location-based social use cases (marketing, sales, etc.). Our fundamental belief is that value-added data achieved through enrichment with specialized algorithms, as well as applying business “know-how” to weight-factor KPIs based on innovative combinations across an ever-increasing variety, volume and velocity of data, will be where real value is achieved. Essentially, Data-as-a-Service becomes a single entry point for the ever-increasing richness and volume of public data, with enrichment and combined capabilities to extract and integrate the right data from the right sources with the right factoring at the right time for faster decision-making and action within your core business applications. As more data becomes available (and in many cases commoditized), this value-added data processing approach will provide you with ongoing competitive advantage. Let’s look at a quick example of creating a master reference relationship that could be used as an input for a variety of your already existing business applications. In phase 1, a simple master relationship is achieved between a company (e.g. General Motors) and a variety of car brands’ social insights.
The reference data allows for easy sort, export and integration into a set of CRM use cases for analytics, sales and marketing CRM. In phase 2, you create more data relationships (e.g., competitors, contacts, other brands) to build broader and deeper references (social profiles, social metadata) for more use cases across CRM, HCM, SRM, etc. This is just the tip of the iceberg, as the number of master reference relationships is constrained only by your imagination and the availability of quality curated data you have to work with. DaaS is just now emerging onto the marketplace as the next step in cloud transformation. For some of you, this may be the first you have heard about it. Let us know if you have questions, or perspectives. In the meantime, we will continue to share insights as we can.

Photo: Erik Araujo, stock.xchng


  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here.

Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

Now, let’s walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:

- Perform an inventory
- View the Windows Azure VM Readiness results and report
- Collect performance data to determine VM sizing
- View the Windows Azure Capacity results and report

Perform an inventory:

1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below:

2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don’t care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.

3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of discovery methods:

- Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
- Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0–based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
- System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
- Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses aren’t being used within the range.
- Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
- Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.

4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added.

NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines to inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.

5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.

6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.

7. On the Summary page, confirm your settings and then click Finish. After clicking Finish the inventory process will start, as shown below:

Windows Azure Readiness results and report

After the inventory process has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open up the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve it if a component is not supported.

Collect Performance Data

Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

Windows Azure Capacity results and report

After the performance metrics are collected, the Azure VM Capacity tile will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the Virtual Machine sizes.

MAP Toolkit 8.0 Beta is available for download here.

Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

Useful References:
- Windows Azure Homepage
- How to guides for Windows Azure Virtual Machines
- Provisioning a SQL Server Virtual Machine on Windows Azure
- Windows Azure Pricing

Peter Saddow
Senior Program Manager – MAP Toolkit Team


  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chander from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm’s externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.

Mike Chandler, Qualcomm

Q: Did you run into any issues when integrating all of the different applications together?
A: Definitely. Our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it their own way. Initially, as a user navigated, if you were looking for it, you could see slight variations in color or font or width, stuff like that. So we had to pull all the developers responsible for the UI together and get pixel-level agreement on a lot of things so we could ensure seamless transitions across applications.

Q: What has been the biggest benefit your end users have seen?
A: Wow, there have been several. An SSO-enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance.

Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?
A: Within a large company, I’m sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long-term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution.

Q: Given the performance, what do you estimate to be the top end capacity of the system?
A: I think our top end capacity is really only limited by our hardware. I’m comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed. We already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the systems vs. db connections, without impacting performance.

Q: What's the overview or summary of feedback from the users interacting with the site?
A: Feedback has been overwhelmingly positive from both the business and our customers. They’re very happy with the new SSO environment, the new LAF, and the performance of the site. Of course, it’s not all roses. No matter what, there are always going to be people that don’t like the layout or the color scheme, etc. By and large, though, customers are happy and the business is happy.

Q: Can you describe the impressions about the site before and after the project within Qualcomm?
A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental; for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up and retry the steps and things would work fine, so why didn’t it work the first time? From a UI perspective, we’d hear comments like it looked like it was built by a high school student.

Vince Casarez & Gourav Goyal, Keste

Q: Did you run into any obstacles when implementing the solution?
A: It's interesting: some people call them "obstacles"; on this project we just called them "dependencies". There were both technical and business-related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams to have a seamless login experience and a seamless end user experience. There was also a set of dependencies on the User Acceptance testing to make sure that everyone understood the use cases for how the system would be used. With branching into a new market and trying to match the simple user experience that many consumer sites have today, there was always a tendency for the team members to provide their suggestions on how things could be simpler. But with all the work up front on the user design and getting the business driving this set of experiences, this minimized the downstream suggestions that tend to distract a team. In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum.

Q: Was there a lot of custom work that needed to be done for this particular solution?
A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications, there are some pre-built integrations, but realistically, for the process flow, we had to build those. The framework and tooling we used made things easier, so we didn’t have to implement core functionality like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes. Task Flows and other elements like Skins are core to the infrastructure or technology stack of Oracle. This then allowed the team to center the project focus around the business flows and use cases to meet the core requirements and keep the project on time.

Q: What do you think were the keys to success for rolling out WebCenter?
A: The 5 main keys to success were: 1) sponsorship from the whole organization around this project, from senior executive agreement, to business owners driving functionality, to IT development alignment; 2) upfront design planning and use case definition to clearly define the project scope and requirements; 3) focused development and project management aligned with the top-level goals and drivers; 4) user acceptance and usability testing along the way to identify potential issues and direct resolution of the issues; and 5) constant prioritization of the issues for development to fix by the business.
It also helps to have great team chemistry and really smart people working on the project. If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!  Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter from Oracle WebCenter


  • 5 Lessons learnt in localization / multi language support in WPF

    - by MarkPearl
For the last few months I have been secretly working away at the second version of an application that we initially released a few years ago. It’s called MaxCut and it is a free panel/cut optimizer for the woodwork, glass and metal industry. One of the motivations for writing MaxCut was to get an end-to-end experience in developing an application for general consumption. From the early days of v1 of MaxCut I would get the odd email thanking me for the software and then listing a few suggestions on how to improve it. Two of the most dominant suggestions that we received were…

- Support for imperial measurements (the original program only supported the metric system)
- Multi-language support (we had someone who volunteered to translate the program into Japanese for us)

I am not going to dive into the imperial-to-metric support in today’s blog post, but I would like to cover a few brief lessons we learned in adding support for multi-language functionality in the software. I have sectioned them below under different lessons.

Lesson 1 – Build multi-language support in from the start

So the first lesson I learnt was: if you know you are going to do multi-language support – build it in from the very beginning! One of the strong points of WPF/Silverlight is data binding in XAML, and so while it wasn’t too painful to retrofit multi-language support into the program, it was still time consuming and a bit tedious to go through mounds and mounds of views, and it would have been a minor job to have implemented this while the forms were being designed.

Lesson 2 – Accommodate for varying word lengths using Grids

The next lesson was a little harder to learn and was learnt a bit further down the road in the development cycle. We developed everything in English, assuming that other languages would have similar character-length words for equivalent meanings… don’t! A word that is short in your language may be of varying character lengths in other languages. Some languages, like Dutch and German, allow for concatenation of nouns, which has the potential to create really long words. We picked up a few places where our views had been structured incorrectly, so that if a word was too long it would get clipped off or cut out. To get around this we began using the WPF grid extensively, with column widths that would automatically expand if they needed to. Generally speaking the grid replacement got round this hurdle, and if in future you have a choice between a stack panel or a grid – think twice before going for the easier option… often the grid will be a bit more work to set up, but will be more flexible.

Lesson 3 – Separate the separators

Our initial run through moving the words to a resource dictionary led us to make what I thought was one potential mistake. If we had a label like the following…

“length : “

In the resource dictionary we put it as a single entry. This is fine until you start using a word more than once. For instance, in our scenario we used the word “length” frequently, and with different variations of the word with grammar and separators included in the resource, we ended up having what I would consider a bloated dictionary. When we removed the separators from the words and put them as their own resources, we saw a dramatic reduction in dictionary size… so something that looked like this…

“length : “
“length. “
“length?”

Was reduced to…

“length”
“:”
“?”
“.”

While this may not seem like a reduction at first glance, consider that the separators “:?.” are used everywhere, and suddenly you see a real reduction in bloat.

Lesson 4 – Centralize the Language Dictionary

This lesson was learnt at the very end of the project, after we had already had a release candidate out in the wild. Because our translations would be done on a volunteer basis and remotely, we wanted it to be really simple for someone to translate our program into another language. As a common design practice we had tiered the application so that we had a business logic layer, a UI layer, etc. The problem was that in several of these layers we had resource files specific to that layer. What this resulted in was us having multiple resource files that we would need to send to our translators. To add to our problems, some of the wordings were duplicated in different resource files, which would result in additional frustration from our translators as they felt they were duplicating work. Eventually the workaround was to make a separate project in VS2010 with just the language translations. We then exposed the dictionary as public within this project and added it as a reference to the other projects within the solution. This solved our problem, as now we had a central dictionary and could remove any duplications.

Lesson 5 – Make a dummy translation file to test that you haven’t missed anything

The final lesson learnt about multi-language support in WPF was: when checking if you have forgotten to translate anything in the inline code, make a test resource file with dummy data. Ideally you want the data for each word to be identical. In our instance we made one which had all the resource key values pointing to a value of “test”. This allowed us to point the language file to our test resource file and very quickly browse through the program and see if we had missed any linking. The alternative to this approach is to have two language files and swap between the two while running the program to make sure that you haven’t missed anything, but the downside of the dual-language-file approach is that it is much harder spotting a mistake if everything is different – almost like playing Where’s Wally / Waldo. It is much easier spotting variance in uniformity – meaning when you put the “test” keyword for everything, anything that didn’t say “test” stuck out like a sore thumb.

So these are my top five lessons learnt on implementing multi-language support in WPF. Feel free to make any suggestions in the comments section if you feel maybe something is more important than one of these or if I got it wrong!
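To make Lesson 4 concrete, here is a minimal sketch of what the centralized dictionary project might look like (the assembly and resource names below are invented for illustration, and the fallback behaviour doubles as the Lesson 5 check):

using System.Globalization;
using System.Resources;

// Lives in its own project that every other layer references, so there is
// exactly one .resx file per language to hand to a volunteer translator.
public static class Translations
{
    private static readonly ResourceManager Resources =
        new ResourceManager("MaxCut.Languages.Strings",
                            typeof(Translations).Assembly);

    public static string Lookup(string key)
    {
        // returning the raw key when a translation is missing makes any
        // untranslated string stick out, much like the "test" dummy file
        return Resources.GetString(key, CultureInfo.CurrentUICulture) ?? key;
    }
}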


  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has not only observed a transformation in the way IT systems support corporate projects but the role project portfolio management (PPM) plays in the enterprise. “15 years ago project management was the domain of project management office (PMO),” Mahmud recalls of earlier days. “But over the course of the past decade, we've seen it transform into a mission critical enterprise discipline, that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive you have direct and complete visibility into what’s kind of going on in the organization—at a level of detail that you're going to consume that information.” Now serving as Oracle’s vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle’s Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive. Profit: What do you feel are the market dynamics that are changing project management today? Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies for project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world’s workforce is going to be mobile. That’s one billion people. So the question is not if you're going to go mobile, it’s how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented. Profit: What is the role of project management in this new landscape? Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit centered to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That’s a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can’t keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, “Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?” This line of questioning can provide early warning that work and strategy are out of alignment; finding the gap between what the business needs to do and the actual performance scorecard. That’s low-hanging fruit for any executive looking to increase efficiency and save money. 
But it can only be obtained through proper assessment of existing projects, and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want to make sure key projects align with corporate strategy, but they will also want the ability to drill down into daily activity and smaller projects to make sure those line up as well. Keeping employees from working on tasks that don't line up with corporate goals, even for a few hours, will, in many ways, become a competitive differentiator. Profit: How do all of these market challenges and shifting trends impact Oracle’s Primavera solutions and meeting customers’ needs? Mahmud: For Primavera, it’s a transformation from being a project management application to a PPM system in the enterprise, and also making that system a mission-critical application by connecting it to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle’s Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what’s going on in the organization at a level of detail that you're able to consume. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have visibility that you didn’t have before from a project execution perspective. Profit: What new strategies and tools are being implemented to create a more efficient workplace for users? Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn’t make it so: you have to get people to want to participate in the system. This can’t be mandated down from the top. It simply doesn’t work that way. A truly adoptable solution is one that makes it super easy for all types of users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter-weight, targeted interfaces such as native iOS applications and smartphone interfaces for the iPhone and Android platforms. Profit: How does this translate into the development of Oracle’s Primavera solutions? Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices and, when married with Oracle Spatial capabilities, gives users a geographical view of what’s going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to status work via e-mail. This functionality leverages the fact that users are in their e-mail system throughout the day, and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live.
Do it in such a way that it’s non-intrusive, do it in such a way that it’s easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.


  • At times, you need to hire a professional.

    - by Phil Factor
After months of increasingly demanding toil, the development team I belonged to was told that the project was to be canned and the whole team would be fired. I’d been brought into the team as an expert in the data implications of a business re-engineering of a major financial institution. Nowadays, you’d call me a data architect, I suppose. I’d spent a happy year being paid consultancy fees solving a succession of interesting problems until the point when the company lost its nerve, and closed the entire initiative. The IT industry was in one of its characteristic mood-swings downwards. After the announcement, we met in the canteen. A few developers had scented the smell of death around the project already and had been applying unsuccessfully for jobs. There was a sense of doom in the mass of dishevelled and bleary-eyed developers. After giving vent to anger and despair, talk turned to getting new employment. It was then that I perked up. I’m not an obvious choice to give advice on getting, or passing, IT interviews. I reckon I’ve failed most of the job interviews I’ve ever attended. I once even failed an interview for a job I’d already been doing perfectly well for a year. The jobs I’ve got have mostly been from personal recommendation. Paradoxically though, from years as a manager trying to recruit good staff, I know a lot about what IT managers are looking for. I gave an impassioned speech outlining the important factors in getting to an interview. The most important thing, certainly in my time at work, is the quality of the résumé or CV. I can’t even guess the huge number of CVs (résumés) I’ve read through, scanning for candidates worth interviewing. Many IT developers find it impossible to describe their career succinctly on two sides of paper. They leave chunks of their life out (were they in prison?), get immersed in detail, put in irrelevancies, describe what was going on at work rather than what they themselves did, exaggerate their importance, criticize their previous employers, aren’t aware of the important aspects of a role to a potential employer, suffer from shyness and modesty, and lack any sort of organized perspective of their work. There are many ways of failing to write a decent CV. Many developers suffer from the delusion that their worth can be recognized purely from the code that they write, and shy away from anything that seems like self-aggrandizement. No. A resume must make a good impression, which means presenting the facts about yourself in a clear and positive way. You can’t do it yourself. Why not have your resume professionally written? A good professional CV writer will know the qualities being looked for in a CV and interrogate you to winkle them out. Their job is to make order and sense out of a confused career, to summarize in one page a mass of detail that presents to any recruiter the information that’s wanted; to stand back and describe an accurate summary of your skills and work experiences dispassionately, without rancor, pity or modesty. You are no more capable of producing an objective documentation of your career than you are of taking your own appendix out. My next recommendation was more controversial. This is to have a professional image overhaul, or makeover, followed by a professionally taken photo portrait. I discovered this by accident. It is normal for IT professionals to face impossible deadlines and long working hours by looking more and more like something that had recently blocked a sink.
Whilst working in IT, and in a state of personal dishevelment, I’d been offered the role in a high-powered amateur production of an old ex- Broadway show, purely for my singing voice. I was supposed to be the presentable star. When the production team saw me, the air was thick with tension and despair. I was dragged kicking and protesting through a succession of desperate grooming, scrubbing, dressing, dieting. I emerged feeling like “That jewelled mass of millinery, That oiled and curled Assyrian bull, Smelling of musk and of insolence.” (Tennyson Maud; A Monodrama (1855) Section v1 stanza 6) I was then photographed by a professional stage photographer.  When the photographs were delivered, I was amazed. It wasn’t me, but it looked somehow respectable, confident, trustworthy.   A while later, when the show had ended, I took the photos, and used them for work. They went with the CV to job applications. It did the trick better than I could ever imagine.  My views went down big with the developers. Old rivalries were put immediately to one side. We voted, with a show of hands, to devote our energies for the entire notice period to getting employable. We had a team sourcing the CV Writer,  a team organising the make-overs and photographer, and a third team arranging  mock interviews. A fourth team determined the best websites and agencies for recruitment, with the help of friends in the trade.  Because there were around thirty developers, we were in a good negotiating position.  Of the three CV Writers we found who lived locally, one proved exceptional. She was an ex-journalist with an eye to detail, and years of experience in manipulating language. We tried her skills out on a developer who seemed a hopeless case, and he was called to interview within a week.  I was surprised, too, how many companies were experts at image makeovers. Within the month, we all looked like those weird slick  people in the ‘Office-tagged’ stock photographs who stare keenly and interestedly at PowerPoint slides in sleek chromium-plated high-rise offices. The portraits we used still adorn the entries of many of my ex-colleagues in LinkedIn. After a months’ worth of mock interviews, and technical Q&A, our stutters, hesitations, evasions and periphrastic circumlocutions were all gone.  There is little more to relate. With the résumés or CVs, mugshots, and schooling in how to pass interviews, we’d all got new and better-paid jobs well  before our month’s notice was ended. Whilst normally, an IT team under the axe is a sad and depressed place to belong to, this wonderful group of people had proved the power of organized group action in turning the experience to advantage. It left us feeling slightly guilty that we were somehow cheating, but I guess we were merely leveling the playing-field.


  • I'm a contract developer and I think I'm about to get screwed [closed]

    - by kagaku
    I do contract development on the side. You could say that I'm a contract developer? Considering I've only ever had one client I'd say that's not exactly the truth - more like I took a side job and needed some extra cash. It started out as a "rebuild our website and we'll pay you $10k" type project. Once that was complete (a bit over schedule, but certainly not over budget), the company hired me on as a "long term support" contractor. The contract is to go from March of this year, expiring on December 31st of this year - 10 months. Over which a payment is to be paid on the 30th of each month for a set amount. I've been fulfilling my end of the contract on all points - doing server maintenence, application and database changes, doing huge rush changes and pretty much just going above and beyond. Currently I'm in the middle of development of an iPhone mobile application (PhoneGap based) which is nearing completion (probably 3-4 weeks from submission). It has not been all peaches and flowers though. Each and every month when my paycheck comes due, there always seems to be an issue of sorts. These issues did not occur during the initial project, only during the support contract. The actual contract states that my check should be mailed out on the 30th of the month. I have received my check on time approximately once (on time being about 2-3 days within the 30th). I've received my paycheck as late as the 15th of the next month - over two weeks late. I've put up with it because I need the paycheck. There have been promises and promises of "we'll send it out on time next time! I promise" - only to receive it just as late the next month. When I ask about payment they give me a vibe like "why are you only worried about money?" - unfortunately I don't have the luxury of not worrying about money. The last straw was with my August payment, which should have been mailed on August 30th. I received it on September 12th. The reason for the delay? "USPS is delaying it man! we sent it out on the 1st!" is the reason I got. When I finally got the check in the mail, the postage on the envelope was marked September 10th - the date it was run through the postage machine. I've been outright lied to, at this point. I carry on working, because again - I need a paycheck. I orchestrated the move of our application to a new server, developed a bunch of new changes and continued work on the iPhone app. All told I probably went over my hourly allotment (I'm paid for 40 hours a month, I probably put in at least 50). On Saturday, the 1st, I gave the main contact at the company (a company of 3, by the way - this is not some big corporation) a ring and filled him in on the status of my work for the past two weeks. Unusually I hadn't heard from him since the middle of September. His response was "oh... well, that is nice and uh.. good job. well, we've been talking within the company about things and we've certainly got some decisions ahead of us..." - not verbatim but you get the idea (I hope?). I got out of this conversation that the site is not doing very well (which it's not) and they're considering pulling the plug. Crap, this contract is going to end early - there goes Christmas! Fine, that's alright, no problem. I'll get paid for the last months work and call it a day. Unfortunately I still haven't gotten last months check, and I'm getting dicked around now. "Oh.. we had problems transferring funds, we'll try and mail it out tomorrow" and "I left a VM with the finance guy, but I can't get ahold of him". 
So I'm getting the feeling I'm not getting paid for all the work I put in for September. This is obviously breach of contract, and I am pissed. Thinking irrationally, I considered changing all their passwords and holding their stuff hostage. Before I think it through (by the way, I am NOT going to do this, realized it would probably get me in trouble), I go and try some passwords for our various accounts. Google Apps? Oh, I'm no longer administrator here. Godaddy? Whoops, invalid password. Disqus? Nope, invalid password here too. Google Adsense / Analytics? Invalid password. Dedicated server account manager? Invalid password. Now, I have the servers root password - I just built the box last week and haven't had a chance to send the guy the root password. Wasn't in a rush, I manage the server and they never touch it. Now all of a sudden all the passwords except this one are changed; the writing is on the wall - I am out. Here's the conundrum. I have the root password, they do not. If I give them this password all the leverage I have is gone, out the door and out of my hands. During this argument of why am I not getting paid the guy sends me an email saying "oh by the way, what's the root username and password to the server?". Considering he knows absolutely nothing, I gave him an "admin" account which really has almost no rights. I still have exclusive access to the server, I just don't know where to go. I can hold their data hostage, but I'm almost positive this is the wrong thing to do. I'd consider it blackmail, regardless of whether or not I have gotten paid yet. I can "break" something on the server and give them the whole "well, if you were paying me I could fix it!" spiel. This works from a "well he's not holding their stuff hostage" point of view, but what stops them from hiring some one else to just fix the issue at hand? For all I know the guys nephew is a "l33t hax0r" and can figure it out for free. I can give in, document as much as I can and take him to small claims court. This is breach of contract, I'm not getting paid. I have a case, right? ???? Does anyone have any experience in this? What can I do? What are my options? I'm broke, I can't afford a lawyer and I can barely afford not getting this paycheck. My wife doesn't work (I work two jobs so she doesn't have to work - we have a 1 year old) and is already looking at getting a part time job to cover the bills. Long term we'll be fine, but this has pissed me off beyond belief! Help me out, I'm about to get screwed.


  • Regarding playing media file in Android media player application

    - by Mangesh
    Hi. I am new to android development. I just started with creating my own media player application by looking at the code samples given in Android SDK. While I am trying to play a local media file (m.3gp), I am getting IOException error :: error(1,-4). Please can somebody help me in this regard. Here is my code. package com.mediaPlayer; import java.io.IOException; import android.app.Activity; import android.app.AlertDialog; import android.content.DialogInterface; import android.os.Bundle; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.media.MediaPlayer; import android.media.MediaPlayer.OnBufferingUpdateListener; import android.media.MediaPlayer.OnCompletionListener; import android.media.MediaPlayer.OnPreparedListener; import android.media.MediaPlayer.OnVideoSizeChangedListener; import android.view.SurfaceHolder; import android.util.Log; public class MediaPlayer1 extends Activity implements OnBufferingUpdateListener, OnCompletionListener,OnPreparedListener, OnVideoSizeChangedListener,SurfaceHolder.Callback { private static final String TAG = "MediaPlayerByMangesh"; // Widgets in the application private Button btnPlay; private Button btnPause; private Button btnStop; private MediaPlayer mMediaPlayer; private String path = "m.3gp"; private SurfaceHolder holder; private int mVideoWidth; private int mVideoHeight; private boolean mIsVideoSizeKnown = false; private boolean mIsVideoReadyToBePlayed = false; // For the id of radio button selected private int radioCheckedId = -1; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { Log.d(TAG, "Entered OnCreate:"); super.onCreate(savedInstanceState); setContentView(R.layout.main); Log.d(TAG, "Creatinging Buttons:"); btnPlay = (Button) findViewById(R.id.btnPlay); btnPause = (Button) findViewById(R.id.btnPause); // On app load, the Pause button is disabled btnPause.setEnabled(false); btnStop = (Button) findViewById(R.id.btnStop); btnStop.setEnabled(false); /* * Attach a OnCheckedChangeListener to the radio group to monitor radio * buttons selected by user */ Log.d(TAG, "Watching for Click"); /* Attach listener to the Calculate and Reset buttons */ btnPlay.setOnClickListener(mClickListener); btnPause.setOnClickListener(mClickListener); btnStop.setOnClickListener(mClickListener); } /* * ClickListener for the Calculate and Reset buttons. Depending on the * button clicked, the corresponding method is called. */ private OnClickListener mClickListener = new OnClickListener() { @Override public void onClick(View v) { switch (v.getId()) { case R.id.btnPlay: Log.d(TAG, "Clicked Play Button"); Log.d(TAG, "Calling Play Function"); Play(); break; case R.id.btnPause: Pause(); break; case R.id.btnStop: Stop(); break; } } }; /** * Play the Video. 
*/ private void Play() { // Create a new media player and set the listeners mMediaPlayer = new MediaPlayer(); Log.d(TAG, "Entered Play function:"); try { mMediaPlayer.setDataSource(path); } catch(IOException ie) { Log.d(TAG, "IO Exception:" + path); } mMediaPlayer.setDisplay(holder); try { mMediaPlayer.prepare(); } catch(IOException ie) { Log.d(TAG, "IO Exception:" + path); } mMediaPlayer.setOnBufferingUpdateListener(this); mMediaPlayer.setOnCompletionListener(this); mMediaPlayer.setOnPreparedListener(this); //mMediaPlayer.setOnVideoSizeChangedListener(this); //mMediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC); } public void onBufferingUpdate(MediaPlayer arg0, int percent) { Log.d(TAG, "onBufferingUpdate percent:" + percent); } public void onCompletion(MediaPlayer arg0) { Log.d(TAG, "onCompletion called"); } public void onVideoSizeChanged(MediaPlayer mp, int width, int height) { Log.v(TAG, "onVideoSizeChanged called"); if (width == 0 || height == 0) { Log.e(TAG, "invalid video width(" + width + ") or height(" + height + ")"); return; } mIsVideoSizeKnown = true; mVideoWidth = width; mVideoHeight = height; if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) { startVideoPlayback(); } } public void onPrepared(MediaPlayer mediaplayer) { Log.d(TAG, "onPrepared called"); mIsVideoReadyToBePlayed = true; if (mIsVideoReadyToBePlayed && mIsVideoSizeKnown) { startVideoPlayback(); } } public void surfaceChanged(SurfaceHolder surfaceholder, int i, int j, int k) { Log.d(TAG, "surfaceChanged called"); } public void surfaceDestroyed(SurfaceHolder surfaceholder) { Log.d(TAG, "surfaceDestroyed called"); } public void surfaceCreated(SurfaceHolder holder) { Log.d(TAG, "surfaceCreated called"); Play(); } private void startVideoPlayback() { Log.v(TAG, "startVideoPlayback"); holder.setFixedSize(176, 144); mMediaPlayer.start(); } /** * Pause the Video */ private void Pause() { ; /* * If all fields are populated with valid values, then proceed to * calculate the tips */ } /** * Stop the Video. */ private void Stop() { ; /* * If all fields are populated with valid values, then proceed to * calculate the tips */ } /** * Shows the error message in an alert dialog * * @param errorMessage * String the error message to show * @param fieldId * the Id of the field which caused the error. This is required * so that the focus can be set on that field once the dialog is * dismissed. */ private void showErrorAlert(String errorMessage, final int fieldId) { new AlertDialog.Builder(this).setTitle("Error") .setMessage(errorMessage).setNeutralButton("Close", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { findViewById(fieldId).requestFocus(); } }).show(); } } Thanks, Mangesh Kumar K.
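
    One likely culprit, inferred from the code rather than stated in the post: setDataSource() is handed the bare relative name "m.3gp", which MediaPlayer cannot resolve, so prepare() fails with an IOException. A minimal sketch of the two usual alternatives, assuming the clip is bundled as res/raw/m.3gp (the resource name is hypothetical); either would replace the setDataSource() call inside Play(), with the same try/catch as the original:

        // Option 1: play a bundled raw resource through a file descriptor
        // (needs: import android.content.res.AssetFileDescriptor;)
        AssetFileDescriptor afd = getResources().openRawResourceFd(R.raw.m);
        mMediaPlayer.setDataSource(afd.getFileDescriptor(), afd.getStartOffset(), afd.getLength());
        afd.close();

        // Option 2: use an absolute path, e.g. a file pushed to the SD card
        mMediaPlayer.setDataSource("/sdcard/m.3gp");

    Note also that holder is never assigned in the code shown; setDisplay(holder) needs the SurfaceHolder of an actual SurfaceView (with addCallback() wired up) before prepare() for video to render.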


  • Some problems with GridView in webpart with multiple filters.

    - by NF_81
    Hello, I'm currently working on a highly configurable Database Viewer webpart for WSS 3.0 which we are going to need for several customized sharepoint sites. Sorry in advance for the large wall of text, but i fear it's necessary to recap the whole issue. As background information and to describe my problem as good as possible, I'll start by telling you what the webpart shall do: Basically the webpart contains an UpdatePanel, which contains a GridView and an SqlDataSource. The select-query the Datasource uses can be set via webbrowseable properties or received from a consumer method from another webpart. Now i wanted to add a filtering feature to the webpart, so i want a dropdownlist in the headerrow for each column that should be filterable. As the select-query is completely dynamic and i don't know at design time which columns shall be filterable, i decided to add a webbrowseable property to contain an xml-formed string with filter information. So i added the following into OnRowCreated of the gridview: void gridView_RowCreated(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.Header) { for (int i = 0; i < e.Row.Cells.Count; i++) { if (e.Row.Cells[i].GetType() == typeof(DataControlFieldHeaderCell)) { string headerText = ((DataControlFieldHeaderCell)e.Row.Cells[i]).ContainingField.HeaderText; // add sorting functionality if (_allowSorting && !String.IsNullOrEmpty(headerText)) { Label l = new Label(); l.Text = headerText; l.ForeColor = Color.Blue; l.Font.Bold = true; l.ID = "Header" + i; l.Attributes["title"] = "Sort by " + headerText; l.Attributes["onmouseover"] = "this.style.cursor = 'pointer'; this.style.color = 'red'"; l.Attributes["onmouseout"] = "this.style.color = 'blue'"; l.Attributes["onclick"] = "__doPostBack('" + panel.UniqueID + "','SortBy$" + headerText + "');"; e.Row.Cells[i].Controls.Add(l); } // check if this column shall be filterable if (!String.IsNullOrEmpty(filterXmlData)) { XmlNode columnNode = GetColumnNode(headerText); if (columnNode != null) { string dataValueField = columnNode.Attributes["DataValueField"] == null ? "" : columnNode.Attributes["DataValueField"].Value; string filterQuery = columnNode.Attributes["FilterQuery"] == null ? "" : columnNode.Attributes["FilterQuery"].Value; if (!String.IsNullOrEmpty(dataValueField) && !String.IsNullOrEmpty(filterQuery)) { SqlDataSource ds = new SqlDataSource(_conStr, filterQuery); DropDownList cbx = new DropDownList(); cbx.ID = "FilterCbx" + i; cbx.Attributes["onchange"] = "__doPostBack('" + panel.UniqueID + "','SelectionChange$" + headerText + "$' + this.options[this.selectedIndex].value);"; cbx.Width = 150; cbx.DataValueField = dataValueField; cbx.DataSource = ds; cbx.DataBound += new EventHandler(cbx_DataBound); cbx.PreRender += new EventHandler(cbx_PreRender); cbx.DataBind(); e.Row.Cells[i].Controls.Add(cbx); } } } } } } } GetColumnNode() checks in the filter property, if there is a node for the current column, which contains information about the Field the DropDownList should bind to, and the query for filling in the items. In cbx_PreRender() i check ViewState and select an item in case of a postback. In cbx_DataBound() i just add tooltips to the list items as the dropdownlist has a fixed width. Previously, I used AutoPostback and SelectedIndexChanged of the DDL to filter the grid, but to my disappointment it was not always fired. 
Now i check __EVENTTARGET and __EVENTARGUMENT in OnLoad and call a function when the postback event was due to a selection change in a DDL: private void FilterSelectionChanged(string columnName, string selectedValue) { columnName = "[" + columnName + "]"; if (selectedValue.IndexOf("--") < 0 ) // "-- All --" selected { if (filter.ContainsKey(columnName)) filter[columnName] = "='" + selectedValue + "'"; else filter.Add(columnName, "='" + selectedValue + "'"); } else { filter.Remove(columnName); } gridView.PageIndex = 0; } "filter" is a HashTable which is stored in ViewState for persisting the filters (got this sample somewhere on the web, don't remember where). In OnPreRender of the webpart, i call a function which reads the ViewState and apply the filterExpression to the datasource if there is one. I assume i had to place it here, because if there is another postback (e.g. for sorting) the filters are not applied any more. private void ApplyGridFilter() { string args = " "; int i = 0; foreach (object key in filter.Keys) { if (i == 0) args = key.ToString() + filter[key].ToString(); else args += " AND " + key.ToString() + filter[key].ToString(); i++; } dataSource.FilterExpression = args; ViewState.Add("FilterArgs", filter); } protected override void OnPreRender(EventArgs e) { EnsureChildControls(); if (WebPartManager.DisplayMode.Name == "Edit") { errMsg = "Webpart in Edit mode..."; return; } if (useWebPartConnection == true) // get select-query from consumer webpart { if (provider != null) { dataSource.SelectCommand = provider.strQuery; } } try { int currentPageIndex = gridView.PageIndex; if (!String.IsNullOrEmpty(m_SortExpression)) { gridView.Sort("[" + m_SortExpression + "]", m_SortDirection); } gridView.PageIndex = currentPageIndex; // for some reason, the current pageindex resets after sorting ApplyGridFilter(); gridView.DataBind(); } catch (Exception ex) { Functions.ShowJavaScriptAlert(Page, ex.Message); } base.OnPreRender(e); } So i set the filterExpression and the call DataBind(). I don't know if this is ok on this late stage.. don't have a lot of asp.net experience after all. If anyone can suggest a better solution, please give me a hint. This all works great so far, except when i have two or more filters and set them to a combination that returns zero records. Bam ... gridview gone, completely - without a possiblity of changing the filters back. So i googled and found out that i have to subclass gridview in order to always show the headerrow. I found this solution and implemented it with some modifications. The headerrow get's displayed and i can change the filters even if the returned result contains no rows. But finally to my current problem: When i have two or more filters set which return zero rows, and i change back one filter to something that should return rows, the gridview remains empty (although the pager is rendered). I have to completly refresh the page to reset the filters. When debugging, i can see in the overridden CreateChildControls of the grid, that the base method indeed returns 0, but anyway... the gridView.RowCount remains 0 after databinding. Anyone have an idea what's going wrong here?
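
    One defensive option, sketched under assumptions (BuildFilterExpression() is a hypothetical helper that builds the same "[col]='value' AND ..." string as ApplyGridFilter, and dataSource must be in its default DataSet mode for Select() to return a DataView): try the expression against the data before committing it, so a zero-row combination is rejected instead of binding the grid to an empty set.

        DataView dv = (DataView)dataSource.Select(DataSourceSelectArguments.Empty);
        dv.RowFilter = BuildFilterExpression();
        if (dv.Count == 0)
        {
            // keep the previous FilterExpression; the grid never sees an empty result
            Functions.ShowJavaScriptAlert(Page, "No rows match the selected filters.");
        }
        else
        {
            dataSource.FilterExpression = dv.RowFilter;
            gridView.DataBind();
        }

    This sidesteps the header-row subclass for the filtering path, though it does cost an extra Select() per postback.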


  • ConfigurationManager.AppSettings is empty?

    - by Mattousai
    Hello All, I have a VS2008 ASP.NET Web Service Application running on the local IIS of my XP machine. A separate project in the same solution uses test methods to invoke the WS calls, and run their processes. When I added a web reference to the WS App, VS2008 created a Settings.settings file in the Properties folder to store the address of the web reference. This process also created a new section in the Web.config file called applicationSettings to store the values from Settings.settings When my application attempts to retrieve configuration values from the appSettings section of the Web.config file, via ConfigurationManager.AppSettings[key], all values are null and AppSettings.AllKeys.Length is always zero. I even reverted the Web.config file to before the web reference was added, and made sure it was exactly the same as a system-generated web.config file for a new project that works fine. After comparing the reverted Web.config and a new Web.config, I addded one simple value in the appSettings section, and still no luck with ConfigurationManager.AppSettings[key]. Here is the reverted Web.config that cannot be read from <?xml version="1.0"?> <configuration> <configSections> <sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication"/> <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="Everywhere" /> <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" allowDefinition="MachineToApplication" /> </sectionGroup> </sectionGroup> </sectionGroup> </configSections> <appSettings> <add key="testkey" value="testvalue"/> </appSettings> <connectionStrings/> <system.web> <!-- Set compilation debug="true" to insert debugging symbols into the compiled page. Because this affects performance, set this value to true only during development. 
--> <compilation debug="false"> <assemblies> <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Data.DataSetExtensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add assembly="System.Xml.Linq, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/> </assemblies> </compilation> <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. --> <authentication mode="Windows" /> <!-- The <customErrors> section enables configuration of what to do if/when an unhandled error occurs during the execution of a request. Specifically, it enables developers to configure html error pages to be displayed in place of a error stack trace. <customErrors mode="RemoteOnly" defaultRedirect="GenericErrorPage.htm"> <error statusCode="403" redirect="NoAccess.htm" /> <error statusCode="404" redirect="FileNotFound.htm" /> </customErrors> --> <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls> </pages> <httpHandlers> <remove verb="*" path="*.asmx"/> <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/> </httpHandlers> <httpModules> <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </httpModules> </system.web> <system.codedom> <compilers> <compiler language="c#;cs;csharp" extension=".cs" warningLevel="4" type="Microsoft.CSharp.CSharpCodeProvider, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <providerOption name="CompilerVersion" value="v3.5"/> <providerOption name="WarnAsError" value="false"/> </compiler> </compilers> </system.codedom> <!-- The system.webServer section is required for running ASP.NET AJAX under Internet Information Services 7.0. It is not necessary for previous version of IIS. 
--> <system.webServer> <validation validateIntegratedModeConfiguration="false"/> <modules> <remove name="ScriptModule" /> <add name="ScriptModule" preCondition="managedHandler" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </modules> <handlers> <remove name="WebServiceHandlerFactory-Integrated"/> <remove name="ScriptHandlerFactory" /> <remove name="ScriptHandlerFactoryAppServices" /> <remove name="ScriptResource" /> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add name="ScriptResource" preCondition="integratedMode" verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </handlers> </system.webServer> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> <dependentAssembly> <assemblyIdentity name="System.Web.Extensions.Design" publicKeyToken="31bf3856ad364e35"/> <bindingRedirect oldVersion="1.0.0.0-1.1.0.0" newVersion="3.5.0.0"/> </dependentAssembly> </assemblyBinding> </runtime> </configuration> Has anyone experienced this, or know how to solve the problem? TIA -Matt
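
    One classic cause that fits this setup, though the post doesn't say where the reading code executes: when the test project drives the calls, ConfigurationManager reads the test project's own app.config, not the web application's Web.config, so appSettings comes back empty. Either copy the <appSettings> block into an app.config in the test project, or open the site's configuration explicitly; a sketch (the virtual path is an assumption):

        // needs: using System.Configuration; using System.Web.Configuration;
        Configuration cfg = WebConfigurationManager.OpenWebConfiguration("/MyWebService");
        string value = cfg.AppSettings.Settings["testkey"].Value;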


  • How to update a database table in SQLite3 on iPhone

    - by Ajeet Kumar Yadav
    Hi I am new in Iphone i am developing application that is use database but i am face problem. In My application I am using table view. I am fetch the value from database in the table view this is done after that we also insert value throw text field in that table of database and display that value in table view.I am also use another table in database table name is Alootikki the value of table alootikki is display in other table view in application and user want to add this table alootikki value in first table of database and display that value in table view when we do this value is display only in the table view this is not write in the database table. when value is display and user want to add other data throw text field then only that add value is show first value is remove from table view. I am not able to solve this plz help me. The code is given bellow for database -(void)Data1 { //////databaseName = @"dataa.sqlite"; databaseName = @"Recipe1.sqlite"; NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDir = [documentPaths objectAtIndex:0]; databasePath =[documentsDir stringByAppendingPathComponent:databaseName]; [self checkAndCreateDatabase]; list1 = [[NSMutableArray alloc] init]; sqlite3 *database; if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) { if(addStmt == nil) { ////////////const char *sql = "insert into Dataa(item) Values(?)"; const char *sql = " insert into Slist(Incredients ) Values(?)"; if(sqlite3_prepare_v2(database, sql, -1, &addStmt, NULL) != SQLITE_OK) NSAssert1(0, @"Error while creating add statement. '%s'", sqlite3_errmsg(database)); } sqlite3_bind_text(addStmt, 1, [i UTF8String], -1, SQLITE_TRANSIENT); //sqlite3_bind_int(addStmt,1,i); // sqlite3_bind_text(addStmt, 1, [coffeeName UTF8String], -1, SQLITE_TRANSIENT); // sqlite3_bind_double(addStmt, 2, [price doubleValue]); if(SQLITE_DONE != sqlite3_step(addStmt)) NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(database)); else //SQLite provides a method to get the last primary key inserted by using sqlite3_last_insert_rowid coffeeID = sqlite3_last_insert_rowid(database); //Reset the add statement. sqlite3_reset(addStmt); // sqlite3_clear_bindings(detailStmt); //} } sqlite3_finalize(addStmt); addStmt = nil; sqlite3_close(database); } -(void)sopinglist { databaseName= @"Recipe1.sqlite"; NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDir = [documentPaths objectAtIndex:0]; databasePath =[documentsDir stringByAppendingPathComponent:databaseName]; [self checkAndCreateDatabase]; list1 = [[NSMutableArray alloc] init]; sqlite3 *database; if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) { if(addStmt == nil) { ///////////const char *sql = "insert into Dataa(item) Values(?)"; const char *sql = " insert into Slist(Incredients,Recipename,foodtype ) Values(?,?,?)"; ///////////// const char *sql =" Update Slist ( Incredients, Recipename,foodtype) Values(?,?,?)"; if(sqlite3_prepare_v2(database, sql, -1, &addStmt, NULL) != SQLITE_OK) NSAssert1(0, @"Error while creating add statement. 
'%s'", sqlite3_errmsg(database)); } /////for( NSString * j in k) sqlite3_bind_text(addStmt, 1, [k UTF8String], -1, SQLITE_TRANSIENT); //sqlite3_bind_int(addStmt,1,i); // sqlite3_bind_text(addStmt, 1, [coffeeName UTF8String], -1, SQLITE_TRANSIENT); // sqlite3_bind_double(addStmt, 2, [price doubleValue]); if(SQLITE_DONE != sqlite3_step(addStmt)) NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(database)); else //SQLite provides a method to get the last primary key inserted by using sqlite3_last_insert_rowid coffeeID = sqlite3_last_insert_rowid(database); //Reset the add statement. sqlite3_reset(addStmt); // sqlite3_clear_bindings(detailStmt); //} } sqlite3_finalize(addStmt); addStmt = nil; sqlite3_close(database); } -(void)Data { ////////////////databaseName = @"dataa.sqlite"; databaseName = @"Recipe1.sqlite"; NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDir = [documentPaths objectAtIndex:0]; databasePath =[documentsDir stringByAppendingPathComponent:databaseName]; [self checkAndCreateDatabase]; list1 = [[NSMutableArray alloc] init]; sqlite3 *database; if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) { if(detailStmt == nil) { const char *sql = "Select * from Slist "; if(sqlite3_prepare_v2(database, sql, -1, &detailStmt, NULL) == SQLITE_OK) { //NSLog(@"Hiiiiiii"); //sqlite3_bind_text(detailStmt, 1, [t1 UTF8String], -1, SQLITE_TRANSIENT); //sqlite3_bind_text(detailStmt, 2, [t2 UTF8String], -2, SQLITE_TRANSIENT); //sqlite3_bind_int(detailStmt, 3, t3); while(sqlite3_step(detailStmt) == SQLITE_ROW) { NSLog(@"Helllloooooo"); //NSString *item= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 0)]; NSString *item= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 0)]; char *str=( char*)sqlite3_column_text(detailStmt, 0); if( str) { item = [ NSString stringWithUTF8String:str ]; } else { item= @""; } //+ (NSString*)stringWithCharsIfNotNull: (char*)item /// { // if ( item == NULL ) // return nil; //else // return [[NSString stringWithUTF8String: item] //stringByTrimmingCharactersInSet: [NSCharacterSet whitespaceAndNewlineCharacterSet]]; //} //NSString *fame= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 1)]; //NSString *cinemax = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 2)]; //NSString *big= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 3)]; //pvr1 = pvr; item1=item; //NSLog(@"%@",item1); data = [[NSMutableArray alloc] init]; list *animal=[[list alloc] initWithName:item1]; // Add the animal object to the animals Array [list1 addObject:animal]; //[list1 addObject:item]; } sqlite3_reset(detailStmt); } sqlite3_finalize(detailStmt); // sqlite3_clear_bindings(detailStmt); } } detailStmt = nil; sqlite3_close(database); } (void)recpies { /////////////////////databaseName = @"Data.sqlite"; databaseName = @"Recipe1.sqlite"; NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDir = [documentPaths objectAtIndex:0]; databasePath =[documentsDir stringByAppendingPathComponent:databaseName]; [self checkAndCreateDatabase]; list1 = [[NSMutableArray alloc] init]; sqlite3 *database; if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) { if(detailStmt == nil) { //////const char *sql = "Select * from Dataa"; const char *sql ="select *from alootikki"; if(sqlite3_prepare_v2(database, sql, -1, 
&detailStmt, NULL) == SQLITE_OK) { //NSLog(@"Hiiiiiii"); //sqlite3_bind_text(detailStmt, 1, [t1 UTF8String], -1, SQLITE_TRANSIENT); //sqlite3_bind_text(detailStmt, 2, [t2 UTF8String], -2, SQLITE_TRANSIENT); //sqlite3_bind_int(detailStmt, 3, t3); while(sqlite3_step(detailStmt) == SQLITE_ROW) { //NSLog(@"Helllloooooo"); //NSString *item= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 0)]; NSString *item= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 0)]; char *str=( char*)sqlite3_column_text(detailStmt, 0); if( str) { item = [ NSString stringWithUTF8String:str ]; } else { item= @""; } //NSString *fame= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 1)]; //NSString *cinemax = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 2)]; //NSString *big= [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 3)]; //pvr1 = pvr; item1=item; //NSLog(@"%@",item1); data = [[NSMutableArray alloc] init]; list *animal=[[list alloc] initWithName:item1]; // Add the animal object to the animals Array [list1 addObject:animal]; //[list1 addObject:item]; } sqlite3_reset(detailStmt); } sqlite3_finalize(detailStmt); // sqlite3_clear_bindings(detailStmt); } } detailStmt = nil; sqlite3_close(database); }
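
    One concrete thing stands out in the code shown: sopinglist() prepares an INSERT with three placeholders (Incredients, Recipename, foodtype) but binds only the first, so the other two NOT NULL columns stay unbound and the insert can fail, which would explain rows showing up in the table view but never reaching the database file. A minimal sketch of a complete write-through (the method name and parameters are hypothetical, following the post's schema):

        - (void)addRow:(NSString *)ingredient recipe:(NSString *)recipe type:(NSString *)type {
            sqlite3 *db;
            if (sqlite3_open([databasePath UTF8String], &db) == SQLITE_OK) {
                const char *sql = "INSERT INTO Slist (Incredients, Recipename, foodtype) VALUES (?,?,?)";
                sqlite3_stmt *stmt;
                if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
                    sqlite3_bind_text(stmt, 1, [ingredient UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(stmt, 2, [recipe UTF8String], -1, SQLITE_TRANSIENT);
                    sqlite3_bind_text(stmt, 3, [type UTF8String], -1, SQLITE_TRANSIENT);
                    if (sqlite3_step(stmt) != SQLITE_DONE)
                        NSLog(@"insert failed: %s", sqlite3_errmsg(db));
                    sqlite3_finalize(stmt);
                }
                sqlite3_close(db);
            }
            [self Data]; // re-query Slist so the table view and the database stay in sync
        }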


  • Convert flyout menu to respond to onclick instead of mouseover

    - by Scott B
    The code below creates a nifty flyout menu action on a nested list item sequence. The client has called and wants the change the default behavior in which the flyouts are triggered by mouseover, so that you have to click to trigger a flyout. Ideally, I would just like to modify this code so that you click on a small icon (plus/minus) that sits to the right of the menu item if it has child menus. Can someone give me a bit of guidance on what bits I'd need to change to accomplish this? /* a few sniffs to circumvent known browser bugs */ var sUserAgent = navigator.userAgent.toLowerCase(); var isIE=document.all?true:false; var isNS4=document.layers?true:false; var isOp=(sUserAgent.indexOf('opera')!=-1)?true:false; var isMac=(sUserAgent.indexOf('mac')!=-1)?true:false; var isMoz=(sUserAgent.indexOf('mozilla/5')!=-1&&sUserAgent.indexOf('opera')==-1&&sUserAgent.indexOf('msie')==-1)?true:false; var isNS6=(sUserAgent.indexOf('netscape6')!=-1&&sUserAgent.indexOf('opera')==-1&&sUserAgent.indexOf('msie')==-1)?true:false; var dom=document.getElementById?true:false; /* sets time until menus disappear in milliseconds */ var iMenuTimeout=1500; var aMenus=new Array; var oMenuTimeout; var iMainMenusLength=0; /* the following boolean controls the z-index property if needed */ /* if is only necessary if you have multiple mainMenus in one file that are overlapping */ /* set bSetZIndeces to true (either here or in the HTML) and the main menus will have a z-index set in descending order so that preceding ones can overlap */ /* the integer iStartZIndexAt controls z-index of the first main menu */ var bSetZIndeces=true; var iStartZIndexAt=1000; var aMainMenus=new Array; /* load up the submenus */ function loadMenus(){ if(!dom)return; var aLists=document.getElementsByTagName('ul'); for(var i=0;i<aLists.length;i++){ if(aLists[i].className=='navMenu')aMenus[aMenus.length]=aLists[i]; } var aAnchors=document.getElementsByTagName('a'); var aItems = new Array; for(var i=0;i<aAnchors.length;i++){ // if(aAnchors[i].className=='navItem')aItems[aItems.length] = aAnchors[i]; aItems[aItems.length] = aAnchors[i]; } var sMenuId=null; var oParentMenu=null; var aAllElements=document.body.getElementsByTagName("*"); if(isIE)aAllElements=document.body.all; /* loop through navItem and navMenus and dynamically assign their IDs */ /* each relies on it's parent's ID being set before it */ for(var i=0;i<aAllElements.length;i++){ if(aAllElements[i].className.indexOf('x8menus')!=-1){ /* load up main menus collection */ if(bSetZIndeces)aMainMenus[aMainMenus.length]=aAllElements[i]; } // if(aAllElements[i].className=='navItem'){ if(aAllElements[i].tagName=='A'){ oParentMenu = aAllElements[i].parentNode.parentNode; if(!oParentMenu.childMenus) oParentMenu.childMenus = new Array; oParentMenu.childMenus[oParentMenu.childMenus.length]=aAllElements[i]; if(aAllElements[i].id==''){ if(oParentMenu.className=='x8menus'){ aAllElements[i].id='navItem_'+iMainMenusLength; //alert(aAllElements[i].id); iMainMenusLength++; }else{ aAllElements[i].id=oParentMenu.id.replace('Menu','Item')+'.'+oParentMenu.childMenus.length; } } } else if(aAllElements[i].className=='navMenu'){ oParentItem = aAllElements[i].parentNode.firstChild; aAllElements[i].id = oParentItem.id.replace('Item','Menu'); } } /* dynamically set z-indeces of main menus so they won't underlap */ for(var i=aMainMenus.length-1;i>=0;i--){ aMainMenus[i].style.zIndex=iStartZIndexAt-i; } /* set menu item properties */ for(var i=0;i<aItems.length;i++){ sMenuId=aItems[i].id; 
sMenuId='navMenu_'+sMenuId.substring(8,sMenuId.lastIndexOf('.')); /* assign event handlers */ /* eval() used here to avoid syntax errors for function literals in Netscape 3 */ eval('aItems[i].onmouseover=function(){modClass(true,this,"activeItem");window.clearTimeout(oMenuTimeout);showMenu("'+sMenuId+'");};'); eval('aItems[i].onmouseout=function(){modClass(false,this,"activeItem");window.clearTimeout(oMenuTimeout);oMenuTimeout=window.setTimeout("hideMenu(\'all\')",iMenuTimeout);}'); eval('aItems[i].onfocus=function(){this.onmouseover();}'); eval('aItems[i].onblur=function(){this.onmouseout();}'); //aItems[i].addEventListener("keydown",function(){keyNav(this,event);},false); } var sCatId=0; var oItem; for(var i=0;i<aMenus.length;i++){ /* assign event handlers */ /* eval() used here to avoid syntax errors for function literals in Netscape 3 */ eval('aMenus[i].onmouseover=function(){window.clearTimeout(oMenuTimeout);}'); eval('aMenus[i].onmouseout=function(){window.clearTimeout(oMenuTimeout);oMenuTimeout=window.setTimeout("hideMenu(\'all\')",iMenuTimeout);}'); sCatId=aMenus[i].id; sCatId=sCatId.substring(8,sCatId.length); oItem=document.getElementById('navItem_'+sCatId); if(oItem){ if(!isOp && !(isMac && isIE) && oItem.parentNode)modClass(true,oItem.parentNode,"hasSubMenu"); else modClass(true,oItem,"hasSubMenu"); /* assign event handlers */ eval('oItem.onmouseover=function(){window.clearTimeout(oMenuTimeout);showMenu("navMenu_'+sCatId+'");}'); eval('oItem.onmouseout=function(){window.clearTimeout(oMenuTimeout);oMenuTimeout=window.clearTimeout(oMenuTimeout);oMenuTimeout=window.setTimeout(\'hideMenu("navMenu_'+sCatId+'")\',iMenuTimeout);}'); eval('oItem.onfocus=function(){window.clearTimeout(oMenuTimeout);showMenu("navMenu_'+sCatId+'");}'); eval('oItem.onblur=function(){window.clearTimeout(oMenuTimeout);oMenuTimeout=window.clearTimeout(oMenuTimeout);oMenuTimeout=window.setTimeout(\'hideMenu("navMenu_'+sCatId+'")\',iMenuTimeout);}'); //oItem.addEventListener("keydown",function(){keyNav(this,event);},false); } } } /* this will append the loadMenus function to any previously assigned window.onload event */ /* if you reassign this onload event, you'll need to include this or execute it after all the menus are loaded */ function newOnload(){ if(typeof previousOnload=='function')previousOnload(); loadMenus(); } var previousOnload; if(window.onload!=null)previousOnload=window.onload; window.onload=newOnload; /* show menu and hide all others except ancestors of the current menu */ function showMenu(sWhich){ var oWhich=document.getElementById(sWhich); if(!oWhich){ hideMenu('all'); return; } var aRootMenus=new Array; aRootMenus[0]=sWhich var sCurrentRoot=sWhich; var bHasParentMenu=false; if(sCurrentRoot.indexOf('.')!=-1){ bHasParentMenu=true; } /* make array of this menu and ancestors so we know which to leave exposed */ /* ex. 
from ID string "navMenu_12.3.7.4", extracts menu levels ["12.3.7.4", "12.3.7", "12.3", "12"] */ while(bHasParentMenu){ if(sCurrentRoot.indexOf('.')==-1)bHasParentMenu=false; aRootMenus[aRootMenus.length]=sCurrentRoot; sCurrentRoot=sCurrentRoot.substring(0,sCurrentRoot.lastIndexOf('.')); } for(var i=0;i<aMenus.length;i++){ var bIsRoot=false; for(var j=0;j<aRootMenus.length;j++){ var oThisItem=document.getElementById(aMenus[i].id.replace('navMenu_','navItem_')); if(aMenus[i].id==aRootMenus[j])bIsRoot=true; } if(bIsRoot && oThisItem)modClass(true,oThisItem,'hasSubMenuActive'); else modClass(false,oThisItem,'hasSubMenuActive'); if(!bIsRoot && aMenus[i].id!=sWhich)modClass(false,aMenus[i],'showMenu'); } modClass(true,oWhich,'showMenu'); var oItem=document.getElementById(sWhich.replace('navMenu_','navItem_')); if(oItem)modClass(true,oItem,'hasSubMenuActive'); } function hideMenu(sWhich){ if(sWhich=='all'){ /* loop backwards b/c WinIE6 has a bug with hiding display of an element when it's parent is already hidden */ for(var i=aMenus.length-1;i>=0;i--){ var oThisItem=document.getElementById(aMenus[i].id.replace('navMenu_','navItem_')); if(oThisItem)modClass(false,oThisItem,'hasSubMenuActive'); modClass(false,aMenus[i],'showMenu'); } }else{ var oWhich=document.getElementById(sWhich); if(oWhich)modClass(false,oWhich,'showMenu'); var oThisItem=document.getElementById(sWhich.replace('navMenu_','navItem_')); if(oThisItem)modClass(false,oThisItem,'hasSubMenuActive'); } } /* add or remove element className */ function modClass(bAdd,oElement,sClassName){ if(bAdd){/* add class */ if(oElement.className.indexOf(sClassName)==-1)oElement.className+=' '+sClassName; }else{/* remove class */ if(oElement.className.indexOf(sClassName)!=-1){ if(oElement.className.indexOf(' '+sClassName)!=-1)oElement.className=oElement.className.replace(' '+sClassName,''); else oElement.className=oElement.className.replace(sClassName,''); } } return oElement.className; /* return new className */ } //document.body.addEventListener("keydown",function(){keyNav(event);},true); function setBubble(oEvent){ oEvent.bubbles = true; } function keyNav(oElement,oEvent){ alert(oEvent.keyCode); window.status=oEvent.keyCode; return false; }
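
    A sketch of the click-driven variant: inside the second loop of loadMenus(), where oItem and sCatId are in scope, drop the eval()'d onmouseover/onmouseout assignments and append a toggle icon instead (the span element and its class name are hypothetical). The immediately-invoked wrapper pins sCatId per item, which the original code achieves with its eval()'d handler strings; the mouseout timers should also go, so a menu stays open until the icon is clicked again.

        var icon = document.createElement('span');
        icon.className = 'toggleIcon';
        icon.appendChild(document.createTextNode('+'));
        icon.onclick = (function (id) {
            return function () {
                var menu = document.getElementById('navMenu_' + id);
                if (menu && menu.className.indexOf('showMenu') == -1) {
                    showMenu('navMenu_' + id);
                    this.firstChild.nodeValue = '-';
                } else {
                    hideMenu('navMenu_' + id);
                    this.firstChild.nodeValue = '+';
                }
            };
        })(sCatId);
        oItem.parentNode.insertBefore(icon, oItem.nextSibling);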


  • Why is my infix-to-postfix code not working?

    - by Anthony Glyadchenko
    For a class assignment, I have to use two stacks in C++ to make an equation to be converted to its left to right equivalent: 2+4*(3+4*8) -- 35*4+2 -- 142 Here is the main code: #include <iostream> #include <cstring> #include "ctStack.h" using namespace std; int main (int argc, char * const argv[]) { string expression = "2+4*2"; ctstack *output = new ctstack(expression.length()); ctstack *stack = new ctstack(expression.length()); bool previousIsANum = false; for(int i = 0; i < expression.length(); i++){ switch (expression[i]){ case '(': previousIsANum = false; stack->cmstackPush(expression[i]); break; case ')': previousIsANum = false; char x; while (x != '('){ stack->cmstackPop(x); output->cmstackPush(x); } break; case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': cout << "A number" << endl; previousIsANum = true; output->cmstackPush(expression[i]); break; case '+': previousIsANum = false; cout << "+" << endl; break; case '-': previousIsANum = false; cout << "-" << endl; break; case '*': previousIsANum = false; cout << "*" << endl; break; case '/': previousIsANum = false; cout << "/" << endl; break; default: break; } } char i = ' '; while (stack->ltopOfStack > 0){ stack->cmstackPop(i); output->cmstackPush(i); cout << i << endl; } return 0; } Here is the stack code (watch out!): #include <cstdio> #include <assert.h> #include <new.h> #include <stdlib.h> #include <iostream> class ctstack { private: long* lpstack ; // the stack itself long ltrue ; // constructor sets to 1 long lfalse ; // constructor sets to 0 // offset to top of the stack long lmaxEleInStack ; // maximum possible elements of stack public: long ltopOfStack ; ctstack ( long lnbrOfEleToAllocInStack ) { // Constructor lfalse = 0 ; // set to zero ltrue = 1 ; // set to one assert ( lnbrOfEleToAllocInStack > 0 ) ; // assure positive argument ltopOfStack = -1 ; // ltopOfStack is really an index lmaxEleInStack = lnbrOfEleToAllocInStack ; // set lmaxEleInStack to max ele lpstack = new long [ lmaxEleInStack ] ; // allocate stack assert ( lpstack ) ; // assure new succeeded } ~ctstack ( ) { // Destructor delete [ ] lpstack ; // Delete the stack itself } ctstack& operator= ( const ctstack& ctoriginStack) { // Assignment if ( this == &ctoriginStack ) // verify x not assigned to x return *this ; if ( this -> lmaxEleInStack < ctoriginStack . lmaxEleInStack ) { // if destination stack is smaller than delete [ ] this -> lpstack ; // original stack, delete dest and alloc this -> lpstack = // sufficient memory new long [ ctoriginStack . lmaxEleInStack ] ; assert ( this -> lpstack ) ; // assure new succeeded // reset stack size attribute this -> lmaxEleInStack = ctoriginStack . lmaxEleInStack ; } // copy original to destination stack for ( long i = 0 ; i < ctoriginStack . lmaxEleInStack ; i ++ ) *( this -> lpstack + i ) = *( ctoriginStack . lpstack + i ) ; this -> ltopOfStack = ctoriginStack . 
ltopOfStack ; // reset stack position attribute return *this ; } long cmstackPush (char lplaceInStack ) { // Push Method if ( ltopOfStack == lmaxEleInStack - 1 ) // stack is full can't add element return lfalse ; ltopOfStack ++ ; // acquire free slot *(lpstack + ltopOfStack ) = lplaceInStack ; // add element return ltrue ; // any number other than zero is true } long cmstackPop (char& lretrievedStackEle ) { // Pop Method if ( ltopOfStack < 0 ) { // stack has no elements lretrievedStackEle = -1 ; // dummy element return lfalse ; } lretrievedStackEle = *( lpstack + ltopOfStack ) ; // stack has element -- return it ltopOfStack -- ; // stack is pop'd return ltrue ; // any number other than zero is true } long cmstackLookAtTop (char& lretrievedStackEle ) { // Pop Method if ( ltopOfStack < 0 ) { // stack has no elements lretrievedStackEle = -1 ; // dummy element return lfalse ; } lretrievedStackEle = *( lpstack + ltopOfStack ) ; // stack has element -- return it return ltrue ; // any number other than zero is true } long cmstackHasAnEle (char& lretrievedTopOfStack ) { // Has element method lretrievedTopOfStack = ltopOfStack ; return ltopOfStack < 0 ? lfalse : ltrue ; // 0 - false stack does not have any ele } // 1 - true stack has at least one element long cmstackMaxNbrOfEle (char& lretrievedMaxStackEle ) { // Maximum element method lretrievedMaxStackEle = lmaxEleInStack ; // return stack size in reference var return ltrue ; // Return Maximum Size of Stack } } ; Thanks, Anthony.
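
    Two bugs stand out in main(). First, in the ')' case, x is tested before it is ever assigned (undefined behavior), and the loop also pushes the '(' itself onto output. Second, the operator cases only print; nothing is ever pushed, so no operator reaches the output until the final drain loop, and precedence is never compared. A sketch of the missing pieces in the post's own style (prec() is a hypothetical helper defined above main):

        int prec(char op) { return (op == '*' || op == '/') ? 2 : 1; }

        // ')' case: pop once before testing, and discard the '(' itself
        char x = ' ';
        stack->cmstackPop(x);
        while (x != '(') { output->cmstackPush(x); stack->cmstackPop(x); }

        // '+', '-', '*', '/' cases: pop operators of >= precedence, then push
        char top = ' ';
        while (stack->cmstackLookAtTop(top) && top != '(' && prec(top) >= prec(expression[i])) {
            stack->cmstackPop(top);
            output->cmstackPush(top);
        }
        stack->cmstackPush(expression[i]);

    One more wrinkle: since output is itself a stack, popping it at the end yields the postfix string reversed; print it bottom-up (or pop it into a third container) to get the tokens in order.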


  • Storing a PNG image in a database and retrieving it in Android 1.5

    - by hany
    hai, I am new to android. I have problem. This is my code but it will not work, the problem is in view binder. Please correct it. // this is my activity package com.android.Fruits2; import java.util.ArrayList; import java.util.HashMap; import android.app.ListActivity; import android.database.Cursor; import android.graphics.Bitmap; import android.graphics.BitmapFactory; import android.os.Bundle; import android.widget.SimpleAdapter; import android.widget.SimpleCursorAdapter; import android.widget.SimpleAdapter.ViewBinder; public class Fruits2 extends ListActivity { private DBhelper mDB; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // setContentView(R.layout.main); mDB = new DBhelper(this); mDB.Reset(); Bitmap img = BitmapFactory.decodeResource(getResources(), R.drawable.icon); mDB.createPersonEntry(new PersonData(img, "Harsha", 24,"mca")); String[] columns = {mDB.KEY_ID, mDB.KEY_IMG, mDB.KEY_NAME, mDB.KEY_AGE, mDB.KEY_STUDY}; String table = mDB.PERSON_TABLE; Cursor c = mDB.getHandle().query(table, columns, null, null, null, null, null); startManagingCursor(c); SimpleCursorAdapter adapter = new SimpleCursorAdapter(this, R.layout.data, c, new String[] {mDB.KEY_IMG, mDB.KEY_NAME, mDB.KEY_AGE, mDB.KEY_STUDY}, new int[] {R.id.img, R.id.name, R.id.age,R.id.study}); adapter.setViewBinder( new MyViewBinder()); setListAdapter(adapter); } } //my viewbinder package com.android.Fruits2; import android.database.Cursor; import android.graphics.BitmapFactory; import android.view.View; import android.widget.ImageView; import android.widget.SimpleCursorAdapter; public class MyViewBinder implements SimpleCursorAdapter.ViewBinder { public boolean setViewValue(View view, Cursor cursor, int columnIndex) { if( (view instanceof ImageView) ) { ImageView iv = (ImageView) view; byte[] img = cursor.getBlob(columnIndex); iv.setImageBitmap(BitmapFactory.decodeByteArray(img, 0, img.length)); return true; } return false; } } // data package com.android.Fruits2; import android.graphics.Bitmap; public class PersonData { private Bitmap bmp; private String name; private int age; private String study; public PersonData(Bitmap b, String n, int k, String v) { bmp = b; name = n; age = k; study = v; } public Bitmap getBitmap() { return bmp; } public String getName() { return name; } public int getAge() { return age; } public String getStudy() { return study; } } //dbhelper package com.android.Fruits2; import java.io.ByteArrayOutputStream; import android.content.ContentValues; import android.content.Context; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteOpenHelper; import android.graphics.Bitmap; import android.provider.BaseColumns; public class DBhelper { public static final String KEY_ID = BaseColumns._ID; public static final String KEY_NAME = "name"; public static final String KEY_AGE = "age"; public static final String KEY_STUDY = "study"; public static final String KEY_IMG = "image"; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; private static final String DATABASE_NAME = "PersonalDB"; private static final int DATABASE_VERSION = 1; public static final String PERSON_TABLE = "Person"; private static final String CREATE_PERSON_TABLE = "create table "+PERSON_TABLE+" (" +KEY_ID+" integer primary key autoincrement, " +KEY_IMG+" blob not null, " +KEY_NAME+" text not null , " +KEY_AGE+" integer not null, " +KEY_STUDY+" text not null);"; private final Context mCtx; private 
boolean opened = false; private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } public void onCreate(SQLiteDatabase db) { db.execSQL(CREATE_PERSON_TABLE); } public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { db.execSQL("DROP TABLE IF EXISTS "+PERSON_TABLE); onCreate(db); } } public void Reset() { openDB(); mDbHelper.onUpgrade(this.mDb, 1, 1); closeDB(); } public DBhelper(Context ctx) { mCtx = ctx; mDbHelper = new DatabaseHelper(mCtx); } private SQLiteDatabase openDB() { if(!opened) mDb = mDbHelper.getWritableDatabase(); opened = true; return mDb; } public SQLiteDatabase getHandle() { return openDB(); } private void closeDB() { if(opened) mDbHelper.close(); opened = false; } public void createPersonEntry(PersonData about) { openDB(); ByteArrayOutputStream out = new ByteArrayOutputStream(); about.getBitmap().compress(Bitmap.CompressFormat.PNG, 100, out); ContentValues cv = new ContentValues(); cv.put(KEY_IMG, out.toByteArray()); cv.put(KEY_NAME, about.getName()); cv.put(KEY_AGE, about.getAge()); cv.put(KEY_STUDY, about.getStudy()); mDb.insert(PERSON_TABLE, null, cv); closeDB(); } } //data.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="wrap_content"> <ImageView android:id = "@+id/img" android:layout_width = "wrap_content" android:layout_height = "wrap_content" > </ImageView> <TextView android:id = "@+id/name" android:layout_width = "wrap_content" android:layout_height = "wrap_content" android:textSize="15dp" android:textColor="#ff0000" > </TextView> <TextView android:id = "@+id/age" android:layout_width = "wrap_content" android:layout_height = "wrap_content" android:textSize="15dp" android:textColor="#ff0000" /> <TextView android:id = "@+id/study" android:layout_width = "wrap_content" android:layout_height = "wrap_content" android:textSize="15dp" android:textColor="#ff0000" /> </LinearLayout> When I run this in android 1.6 and 2.1, it works. But when I run in android 1.5, not work. My application is android 1.5. Please correct and send code to me. Thank you.
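
    Nothing in the code shown is obviously version-specific, so this is a guess rather than a verified 1.5 fix: make the binder claim only the image column (looked up by name, not position) and leave the text columns to the adapter, and check LogCat on the 1.5 emulator for the actual exception. A sketch of setViewValue with those guards:

        public boolean setViewValue(View view, Cursor cursor, int columnIndex) {
            int imgCol = cursor.getColumnIndexOrThrow(DBhelper.KEY_IMG);
            if (columnIndex == imgCol && view instanceof ImageView) {
                byte[] img = cursor.getBlob(columnIndex);
                if (img != null && img.length > 0) {
                    ((ImageView) view).setImageBitmap(
                            BitmapFactory.decodeByteArray(img, 0, img.length));
                }
                return true;  // handled here
            }
            return false;     // let SimpleCursorAdapter bind name/age/study itself
        }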


  • What's wrong with the following code?

    - by giri
    Hi i am trying to send sms to my mobile using java.When I run the application I am getting the the follwing error. package HelloWorld; import java.io.*; import java.util.BitSet; import javax.comm.*; import java.lang.*; public class SerialToGsm { InputStream in; OutputStream out; String lastIndexRead; String senderNum; String smsMsg; SerialToGsm(String porta) { try { // CommPortIdentifier portId = CommPortIdentifier.getPortIdentifier("serial0"); CommPortIdentifier portId = CommPortIdentifier.getPortIdentifier(porta); SerialPort sp = (SerialPort)portId.open("Sms_GSM", 0); sp.setSerialPortParams(9600, SerialPort.DATABITS_8, SerialPort.STOPBITS_1, SerialPort.PARITY_NONE); sp.setFlowControlMode(sp.FLOWCONTROL_NONE); in = sp.getInputStream(); out = sp.getOutputStream(); // modem reset sendAndRecv("+++AT", 30); // delay for 20 sec/10 sendAndRecv("AT&F", 30); sendAndRecv("ATE0", 30); // echo off sendAndRecv("AT +CMEE=1", 30); // verbose error messages sendAndRecv("AT+CMGF=0", 70); // set pdu mode // sendAndRecv("AT V1E0S0=0&D2&C1", 1000000); } catch (Exception e) { System.out.println("Exception " + e); System.exit(1); } } private String sendAndRecv(String s, int timeout) { try { // clean serial port input buffer in.skip(in.available()); System.out.println("=> " + s); s = s + "\r"; // add CR out.write(s.getBytes()); out.flush(); String strIn = new String(); for (int i = 0; i < timeout; i++){ int numChars = in.available(); if (numChars > 0) { byte[] bb = new byte[numChars]; in.read(bb,0,numChars); strIn += new String(bb); } // start exit conditions // --------------------- if (strIn.indexOf(">\r\n") != -1) { break; } if (strIn.indexOf("OK\r\n") != -1){ break; } if (strIn.indexOf("ERROR") != -1) { // if find 'error' wait for CR+LF if (strIn.indexOf("\r\n",strIn.indexOf("ERROR") + 1) != -1) { break; } } Thread.sleep(100); // delay 1/10 sec } System.out.println("<= " + strIn); if (strIn.length() == 0) { return "ERROR: len 0"; } return strIn; } catch (Exception e) { System.out.println("send e recv Exception " + e); return "ERROR: send e recv Exception"; } } public String sendSms (String numToSend, String whatToSend) { ComputSmsData sms = new ComputSmsData(); sms.setAsciiTxt(whatToSend); sms.setTelNum(numToSend); // sms.setSMSCTelNum("+393359609600"); // SC fixed String s = new String(); s = sendAndRecv("AT+CMGS=" + (sms.getCompletePduData().length() / 2) + "\r", 30); // System.out.println("==> AT+CMGS=" + (sms.getCompletePduData().length() / 2)); // System.out.println("<== " + s); if (s.indexOf(">") != -1) { // s = sendAndRecv(sms.getSMSCPduData() + sms.getCompletePduData() + "\u001A"); // usefull one day? 
// System.out.println("Inviero questo >>>> " + sms.getCompletePduData()); // if this sintax won't work try remove 00 prefix s = sendAndRecv("00" + sms.getCompletePduData() + "\u001A", 150); // System.out.println("<== " + s); return s; } else { return "ERROR"; } } // used to reset message data private void resetGsmObj() { lastIndexRead = null; senderNum = null; smsMsg = null; } public String checkSms (){ String str = new String(); String strGsm = new String(); strGsm = sendAndRecv("AT+CMGL=0", 30); // list unread msg and sign them as read // if answer contain ERROR then ERROR if (strGsm.indexOf("ERROR") != -1) { resetGsmObj(); return strGsm; // error } strGsm = sendAndRecv("AT+CMGL=1", 30); // list read msg // if answer contain ERROR then ERROR if (strGsm.indexOf("ERROR") != -1) { resetGsmObj(); return strGsm; // error } // evaluate message index if (strGsm.indexOf(':') <= 0) { resetGsmObj(); return ("ERROR unexpected answer"); } str = strGsm.substring(strGsm.indexOf(':') + 1,strGsm.indexOf(',')); str = str.trim(); // remove white spaces // System.out.println("Index: " + str); lastIndexRead = str; // find message string // ------------------- // look for start point (search \r, then skip \n, add and one more for right char int startPoint = strGsm.indexOf("\r",(strGsm.indexOf(":") + 1)) + 2; int endPoint = strGsm.indexOf("\r",startPoint + 1); if (endPoint == -1) { // only one message endPoint = strGsm.length(); } // extract string str = strGsm.substring(startPoint, endPoint); System.out.println("String to be decoded :" + str); ComputSmsData sms = new ComputSmsData(); sms.setRcvdPdu(str); // SMSCNum = new String(sms.getRcvdPduSMSC()); senderNum = new String(sms.getRcvdSenderNumber()); smsMsg = new String(sms.getRcvdPduTxt()); System.out.println("SMSC number: " + sms.getRcvdPduSMSC()); System.out.println("Sender number: " + sms.getRcvdSenderNumber()); System.out.println("Message: " + sms.getRcvdPduTxt()); return "OK"; } public String readSmsSender() { return senderNum; } public String readSms() { return smsMsg; } public String delSms() { if (lastIndexRead != "") { return sendAndRecv("AT+CMGD=" + lastIndexRead, 30); } return ("ERROR"); } } ERROR: Error loading SolarisSerial: java.lang.UnsatisfiedLinkError: no SolarisSerialParallel in java.library.path Caught java.lang.UnsatisfiedLinkError: com.sun.comm.SolarisDriver.readRegistrySerial(Ljava/util/Vector;Ljava/lang/String;)I while loading driver com.sun.comm.SolarisDriver Exception javax.comm.NoSuchPortException
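
    The UnsatisfiedLinkError at the bottom is an environment problem rather than a bug in this class: the javax.comm installation on the classpath is the Solaris build (SolarisDriver / SolarisSerialParallel), whose native library cannot load on another platform. Either install the platform-matching javax.comm (comm.jar plus javax.comm.properties and the native library, e.g. win32com.dll on Windows), or switch to the RXTX drop-in, which mirrors the API under gnu.io so only the imports and the port name change; a sketch (the port name is an assumption):

        // import gnu.io.CommPortIdentifier;  import gnu.io.SerialPort;
        CommPortIdentifier portId = CommPortIdentifier.getPortIdentifier("COM1");
        SerialPort sp = (SerialPort) portId.open("Sms_GSM", 2000);
        sp.setSerialPortParams(9600, SerialPort.DATABITS_8,
                SerialPort.STOPBITS_1, SerialPort.PARITY_NONE);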


  • Firebug saying "not a function"

    - by Aaron
    <script type = "text/javascript"> var First_Array = new Array(); function reset_Form2() {document.extraInfo.reset();} function showList1() {document.getElementById("favSports").style.visibility="visible";} function showList2() {document.getElementById("favSubjects").style.visibility="visible";} function hideProceed() {document.getElementById('proceed').style.visibility='hidden';} function proceedToSecond () { document.getElementById("div1").style.visibility="hidden"; document.getElementById("div2").style.visibility="visible"; document.getElementById("favSports").style.visibility="hidden"; document.getElementById("favSubjects").style.visibility="hidden"; } function backToFirst () { document.getElementById("div1").style.visibility="visible"; document.getElementById("div2").style.visibility="hidden"; document.getElementById("favSports").style.visibility="visible"; document.getElementById("favSubjects").style.visibility="visible"; } function reset_Form(){ document.personalInfo.reset(); document.getElementById("favSports").style.visibility="hidden"; document.getElementById("favSubjects").style.visibility="hidden"; } function isValidName(firstStr) { var firstPat = /^([a-zA-Z]+)$/; var matchArray = firstStr.match(firstPat); if (matchArray == null) { alert("That's a weird name, try again"); return false; } return true; } function isValidZip(zipStr) { var zipPat =/[0-9]{5}/; var matchArray = zipStr.match(zipPat); if(matchArray == null) { alert("Zip is not in valid format"); return false; } return true; } function isValidApt(aptStr) { var aptPat = /[\d]/; var matchArray = aptStr.match(aptPat); if(matchArray == null) { if (aptStr=="") { return true; } alert("Apt is not proper format"); return false; } return true; } function isValidDate(dateStr) { //requires 4 digit year: var datePat = /^(\d{1,2})(\/|-)(\d{1,2})\2(\d{4})$/; var matchArray = dateStr.match(datePat); if (matchArray == null) { alert("Date is not in a valid format."); return false; } return true; } function checkRadioFirst() { var rb = document.personalInfo.salutation; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { return true; } } alert("Please specify a salutation"); return false; } function checkCheckFirst() { var rb = document.personalInfo.operatingSystems; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { return true; } } alert("Please specify an operating system") ; return false; } function checkSelectFirst() { if ( document.personalInfo.sports.selectedIndex == -1) { alert ( "Please select a sport" ); return false; } return true; } function checkRadioSecond() { var rb = document.extraInfo.referral; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { return true; } } alert("Please select form of referral"); return false; } function checkCheckSecond() { var rb = document.extraInfo.officeSupplies; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { return true; } } alert("Please select an office supply option"); return false; } function checkSelectSecond() { if ( document.extraInfo.colorPick.selectedIndex == 0 ) { alert ( "Please select a favorite color" ); return false; } return true; } function check_Form(){ var retvalue = isValidDate(document.personalInfo.date.value); if(retvalue) { retvalue = isValidZip(document.personalInfo.zipCode.value); if(retvalue) { retvalue = isValidName(document.personalInfo.nameFirst.value); if(retvalue) { retvalue = checkRadioFirst(); if(retvalue) { retvalue = checkCheckFirst(); if(retvalue) { retvalue = checkSelectFirst(); if(retvalue) { retvalue = isValidApt(document.personalInfo.aptNum.value); 
if(retvalue){ document.getElementById('proceed').style.visibility='visible'; var rb = document.personalInfo.salutation; for(var i=0;i<rb.length;i++) { if(rb[i].checked) { var salForm = rb[i].value; } } var SportsOptions = ""; for(var j=0;j<document.personalInfo.sports.length;j++){ if ( document.personalInfo.sports.options[j].selected){ SportsOptions += document.personalInfo.sports.options[j].value + " "; } } var SubjectsOptions= ""; for(var k=0;k<document.personalInfo.subjects.length;k++){ if ( document.personalInfo.subjects.options[k].selected){ SubjectsOptions += document.personalInfo.subjects.options[k].value + " "; } } var osBox = document.personalInfo.operatingSystems; var OSOptions = ""; for(var y=0;y<osBox.length;y++) { if(osBox[y].checked) { OSOptions += osBox[y].value + " "; } } First_Array[0] = salForm; First_Array[1] = document.personalInfo.nameFirst.value; First_Array[2] = document.personalInfo.nameMiddle.value; First_Array[3] = document.personalInfo.nameLast.value; First_Array[4] = document.personalInfo.address.value; First_Array[5] = document.personalInfo.aptNum.value; First_Array[6] = document.personalInfo.city.value; for(var l=0; l<document.personalInfo.state.length; l++) { if (document.personalInfo.state.options[l].selected) { First_Array[7] = document.personalInfo.state[l].value; } } First_Array[8] = document.personalInfo.zipCode.value; First_Array[9] = document.personalInfo.date.value; First_Array[10] = document.personalInfo.phone.value; First_Array[11] = SportsOptions; First_Array[12] = SubjectsOptions; First_Array[13] = OSOptions; alert("Everything looks good."); document.getElementById('validityButton').style.visibility='hidden'; } } } } } } } } /*function formAction2() { var retvalue; retvalue = checkRadioSecond(); if(!retvalue) { return retvalue; } retvalue = checkCheckSecond(); if(!retvalue) { return retvalue; } return checkSelectSecond() ; } */ </script> This is just a sample of the code, there are alot more functions, but I thought the error might be related to surrounding code. I have absolutely no idea why, as I know all the surrounding functions execute, and First_Array is populated. However when I click the Proceed to Second button, the onclick attribute does not execute because Firebug says proceedToSecond is not a function button code: <input type="button" id="proceed" name="proceedToSecond" onclick="proceedToSecond();" value="Proceed to second form">
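
    A likely cause, inferred from the markup rather than stated anywhere in the post: inline event handlers resolve names against the element and its enclosing form before the global scope, so name="proceedToSecond" on the button shadows the global function of the same name, and onclick finds the input element instead - hence "not a function". Renaming either side avoids the collision:

        <input type="button" id="proceed" name="proceedBtn"
               onclick="proceedToSecond();" value="Proceed to second form">
        <!-- or keep the name and call window.proceedToSecond() explicitly -->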

