Search Results

  • Oracle OpenWorld Highlights

    - by Doug Reid
    We are in the final days of Oracle OpenWorld 2012, and the data integration team has been hard at work giving sessions, meeting customers, demonstrating products and conducting hands-on labs. It has been a great conference, but the best part is meeting our customers and learning about all the great implementations of our products.

    Wednesday was the last day that the exhibition hall was open, and attendees were getting in their final opportunities to see our products and meet with the product management team. Two hours before the close of the hall, people lined up to learn about GoldenGate 11gR2, Monitor, Adapters, Veridata, and all the different use cases. Here's a picture of Sjaak Vossepoel, our DIS Sales Consulting Manager for EMEA, speaking to a potential customer about the options for using Oracle GoldenGate for heterogeneous data replication.

    Over the last two days, the GoldenGate team ran two labs: Introduction to Oracle GoldenGate Veridata and Deep Dive into Oracle GoldenGate. Both of the labs were completely booked out, and unfortunately we had to turn people away. BUT, all of our labs were recorded recently, so if you were not able to get into the lab or did not have enough time to complete your labs, visit youtube.com/oraclegoldengate to see a complete recording of the labs we used at OpenWorld, plus more. Here are a couple of pictures from the Deep Dive into Oracle GoldenGate led by Chris Lawless from the Product Management team. Thanks to the GoldenGate Hands-on Lab team for putting on a great session!

    We will post more information about where you can find additional details on OpenWorld as it becomes public.

    Read the article

  • What should be the path for storing Maildir e-mails?

    - by Thufir
    Am I storing e-mails to the correct path? Working from the dovecot-postfix package, I'm able to deliver e-mail to myself like so:

        thufir@dur:~$ telnet localhost 25
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 dur.bounceme.net ESMTP Postfix (Ubuntu)
        HELO me
        250 dur.bounceme.net
        mail from:<[email protected]>
        250 2.1.0 Ok
        rcpt to:<thufir@localhost>
        250 2.1.5 Ok
        data
        354 End data with <CR><LF>.<CR><LF>
        subject: to evolution mail
        we'll see if this goes through.
        .
        250 2.0.0 Ok: queued as 43D6F2A07C1
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.
        thufir@dur:~$

    and then here's the message:

        thufir@dur:~$ ll Maildir/new/
        total 20
        drwx------ 2 thufir thufir 4096 Nov 16 18:56 ./
        drwx------ 5 thufir thufir 4096 Nov 16 18:56 ../
        -rw------- 1 thufir thufir  410 Nov 16 11:57 1353095866.M305477P3932.dur,S=410,W=422
        -rw------- 1 thufir thufir  424 Nov 16 17:20 1353115248.M841336P2990.dur,S=424,W=436
        -rw------- 1 thufir thufir  445 Nov 16 18:56 1353121003.M187706P3838.dur,S=445,W=457

        thufir@dur:~$ nl Maildir/new/1353121003.M187706P3838.dur\,S\=445\,W\=457
        1 Return-Path: <[email protected]>
        2 X-Original-To: thufir@localhost
        3 Delivered-To: thufir@localhost
        4 Received: from me (localhost [127.0.0.1])
        5   by dur.bounceme.net (Postfix) with SMTP id 43D6F2A07C1
        6   for <thufir@localhost>; Fri, 16 Nov 2012 18:55:55 -0800 (PST)
        7 subject: to evolution mail
        8 Message-Id: <[email protected]>
        9 Date: Fri, 16 Nov 2012 18:55:55 -0800 (PST)
        10 From: [email protected]
        11 we'll see if this goes through.

    Do I perhaps have Postfix misconfigured? I ask because Evolution seems to use a different path for mail.
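    In case it helps others reading this: delivery into ~/Maildir is typically controlled on the Postfix side by the home_mailbox setting (or by handing local delivery to Dovecot's LDA), while Dovecot's mail_location tells the IMAP side where to read mail from. A minimal sketch of the two settings - assuming the stock dovecot-postfix layout; the exact file paths may differ on your system:

        # /etc/postfix/main.cf  (assumption: Postfix itself performs local delivery)
        home_mailbox = Maildir/

        # /etc/dovecot/conf.d/10-mail.conf  (assumption: Dovecot should read the same Maildir)
        mail_location = maildir:~/Maildir

    If Evolution reads the account over IMAP/POP from Dovecot rather than from a local spool, both sides will then agree on ~/Maildir.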

    Read the article

  • LINQ to Twitter Maintenance Feedback

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/06/16/linq-to-twitter-maintenance-feedback.aspx

    It’s always fun to receive positive feedback on your work. If you receive a sufficient amount of positive feedback, you know you’re doing something right. Sometimes, people provide negative feedback too. There are a couple of ways to handle it: come back fighting or engage for clarification. The way you handle the negative feedback depends on what your goals are.

    Feedback Approaches

    If you know the feedback is incorrect and you need to promote your idea or product, you might want to come back fighting. The feedback might just be comments by a troll or competitor wanting to spread FUD. However, this could be the totally wrong approach if you misjudge the source and intentions of the feedback. In a lot of cases, feedback is a golden opportunity. Sometimes, a problem exists that you either don’t know about or don’t realize the true impact of. If you decide to come back fighting, you might lose the opportunity to learn something new. However, if you engage the person providing the feedback, looking for clarification, you might learn something very important. Negative feedback and its clarification can lead to the collection of useful and actionable data.

    In my case, something that prompted this blog post, I noticed someone who tweeted a negative comment about LINQ to Twitter. Normally, any less-than-stellar comments are from folks that need help – so I help if I can. This was different. It was like “Don’t use LINQ to Twitter”. This is an open source project, the comment didn’t come from a competing project, and it sounded more like an expression of frustration. So I engaged. Not only did the person respond, but I got some decent quality feedback. What’s also interesting is that a couple of other side conversations sprouted on the subject, which gave me more useful data.

    LINQ to Twitter Thread

    Actions

    Essentially, this particular issue centered around maintenance. There are actually several sub-issues at play here: dependencies, error handling, debugging, and visibility. I’ll describe each one and my interpretation.

    Dependencies

    Dependencies are where a library has references to other libraries. This means that when you build your application, you need DLLs for the entire dependency graph of your application. There are several potential problems with this, including more libraries for configuration management, potential versioning mismatches, and lack of cross-platform support. In the early days of LINQ to Twitter, I allowed developers to contribute and add dependencies, but it became very problematic (for the reasons stated). It was like a ball and chain that kept me from moving forward. So, I refactored and pulled other open source code into my project to eliminate external dependencies. This lets me fix the code in my project without relying on someone else to upgrade or fix their DLL. The motivation for this was early negative feedback that translated into important data, and I acted on it. Today, LINQ to Twitter has zero dependencies.

    Note: Rejecting good code from community members who worked hard to make your project better is a painful experience in itself. I have to point out that their contributions were not in vain, because they had a positive influence on my subsequent refactoring that resulted in a better developer experience.

    Error Handling

    Error handling has been a problem in the past. I have this combination of supporting both synchronous and asynchronous (APM) processing that can be complex at times. Within the last 6 months, I did a fair amount of refactoring to detect errors and process them properly. I also refactored TwitterQueryException so it includes important data from Twitter. During this refactoring, I’ve made breaking changes that I felt would improve the development experience (small things like renaming a callback property to Exception, rather than Error). I think the async error handling is much better than it was a year ago. For all the work I’ve done, there is more to do. I think that a combination of more error handling support, e.g. improving semantics, and education through documentation and samples will improve the error handling story. Because of what I’ve done so far, it isn’t bad, but I see opportunities for improvement.

    Debugging

    Debugging can be painful. Here’s why: you have multiple layers of technology to navigate and figure out where the real problem is – Twitter API, Security, HTTP, LINQ to Twitter, and application. You can probably add your own nuances to that list, but the point is that debugging in this environment can be complex. I think that my plans for error handling will contribute to making the debugging process easier. However, there’s more I can do in the way of documentation and guidance. Some of the questions to be answered revolve around when something goes wrong, how the developer figures out that there is a problem, what the problem is, and what to do about it. One example that has gone a long way toward helping LINQ to Twitter developers is the 401 FAQ. A 401 Unauthorized is the error that the Twitter API returns when a user isn’t able to authenticate, and it is one of the most difficult problems faced by LINQ to Twitter developers. What I did was read guidance from Twitter and collect techniques from my own development and from helping other developers to compile an extensive list of reasons for the 401 and ways to fix the problem. At one time, over half of the questions I answered in the forums were to help solve 401 issues. After publishing the 401 FAQ, I rarely get a 401 question, and when I do, it’s because the person didn’t know about the FAQ. If the person is too lazy to read the FAQ, that’s not my issue, but the results in support issues have been dramatic. I think debugging can benefit from the education and documentation approach, but I’m always open to suggestions on whatever else I can do.

    Visibility

    Visibility is a nuance of the error handling/debugging discussion but is deeply rooted in comfort and control. The questions to ask in this area are what is happening as my code runs and how testable is the code. In support of these areas, LINQ to Twitter does have logging and TwitterContext properties that help you see what’s happening on requests. The logging functionality allows any developer to connect a TextWriter to the Log property of TwitterContext to see what’s happening. Further, TwitterContext has a Headers property to see the headers Twitter returns and a RawResults property to show the JSON string Twitter returns. From a testing perspective, I’ve been able to write hundreds of unit tests, over 600 when this post is published, and growing. If you write your own library, you have full control over all of these aspects.

    The tradeoff here is that while you have access to the LINQ to Twitter source code and can modify it for all the visibility you want, LINQ to Twitter *will* change (which is good) and you will have to figure out how to merge that with your changes (which is hard). The fact is that this is a limitation of any 3rd party library, not just LINQ to Twitter. So, it’s a design decision where the tradeoff is between control and productivity. That said, there are things I can do with LINQ to Twitter to make the visibility story more compelling. I think there are opportunities to improve diagnostics. This would be a ton of work because it would need to provide multi-level logging that can be tuned for production and support any logging provider you want to attach. I’ve considered approaches such as how the new Semantic Logging application block connects to Windows Error Reporting as a potential target. Whatever I do would need to be extensible without creating native external dependencies, e.g. how many 3rd party libraries force a dependency on a logging framework that you don’t use? So, this won’t be an easy feat, but I believe it can be part of the roadmap. I think that a lot of developers are unaware of the existing visibility features, so the first step would be to provide more documentation and guidance. My thought is that this would lead to more feedback that will help improve this area.

    Summary

    Recent feedback highlights some of the items that are important to LINQ to Twitter developers, such as dependencies, error handling, debugging, and visibility. I know that there are maintenance issues that have been problems for LINQ to Twitter developers in the past. I’ve done a lot of work in this area, such as improving error handling, adding visibility features, and providing extensive API documentation. That said, there is more to be done to make LINQ to Twitter the best Twitter API experience available for .NET developers, and I welcome anyone’s thoughts on what I’ve written here or on new improvements. @JoeMayo
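    Since the visibility hooks above are easy to miss, here is a rough, hypothetical sketch of wiring them up - only the Log, Headers and RawResults members are taken from the post itself; the authorizer variable and the query step are placeholders, not a confirmed API:

        // Hypothetical sketch (not an official sample): route LINQ to Twitter's request
        // logging to any TextWriter via the Log property mentioned above.
        var twitterCtx = new TwitterContext(auth);   // "auth" = whatever authorizer your app already builds
        twitterCtx.Log = Console.Out;                // or a StreamWriter pointed at a log file

        // ... run your usual LINQ to Twitter query here ...

        // Per the post, RawResults then holds the raw JSON string Twitter returned.
        Console.WriteLine(twitterCtx.RawResults);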

    Read the article

  • Testing a codebase with sequential cohesion

    - by iveqy
    I have this really simple program written in C with ncurses that's basically a front-end to sqlite3. I would like to use TDD to continue the development and have found a nice C unit-testing framework for this. However, I'm totally stuck on how to apply it. Take this case for example: a user types the letter 'l', which is captured by ncurses getch(), and then an sqlite3 query is run that calls a callback function for every row. This callback function prints stuff to the screen via ncurses. So the obvious way to fully test this is to simulate a keyboard and a terminal and make sure that the output is what's expected. However, this sounds too complicated. I was thinking about adding an abstraction layer between the database and the UI, so that the callback function populates a list of entries and that list is printed later. In that case I would be able to check whether that list contains the expected values. However, why would I struggle with a data structure and lists in my program when sqlite3 already does this? For example, if the user wants to see the list sorted some other way, it would be expensive to throw away the list and repopulate it. I would need to sort the list, but why should I implement sorting when sqlite3 already has it? With my original design I could just do another query, sorted differently. Previously I've only done TDD with command-line applications, where it's really easy to just compare the output with what I expected. Another way would be to add a CLI interface to the program and wrap a test program around the CLI to test everything (the way git.git does with its test framework). So the question is: how do I add testing to a tightly integrated database/UI?

    Read the article

  • I'm doing 90% maintenance and 10% development, is this normal?

    - by TiredProgrammer
    I have just recently started my career as a web developer for a medium-sized company. As soon as I started, I got the task of expanding an existing application (badly coded, developed by multiple programmers over the years, handling the same tasks in different ways, zero structure). After I had successfully extended this application with the requested functionality, they gave me the task of fully maintaining it. This was of course not a problem, or so I thought. But then I was told I wasn't allowed to improve the existing code and should only focus on bug fixes when a bug gets reported. From then on I have had 3 more projects just like the one above that I now also have to maintain. And I got 4 projects where I was allowed to create the application from scratch, and I have to maintain those as well. At this moment I'm slowly beginning to go crazy from the daily mails of users (read: managers) for each application I have to maintain. They expect me to handle these mails immediately while also working on 2 other new projects (and there are already 5 more projects lined up after those). The sad thing is that I have yet to receive a bug report on anything I coded myself; all I have received are the occasional "let's do things 180 degrees differently" change requests. Anyway, is this normal? In my opinion I'm doing the work equivalent of a whole team of developers. Was I an idiot to expect things to be different? I guess this post has turned into a big rant, but please tell me that this is not the same for every developer. P.S. My salary is almost equal to, if not lower than, that of a cashier at a supermarket.

    Read the article

  • Why is quicksort better than other sorting algorithms in practice?

    - by Raphael
    This is a repost of a question on cs.SE by Janoma. Full credits and spoils to him or cs.SE. In a standard algorithms course we are taught that quicksort is O(n log n) on average and O(n²) in the worst case. At the same time, other sorting algorithms are studied which are O(n log n) in the worst case (like mergesort and heapsort), and even linear time in the best case (like bubblesort) but with some additional needs of memory. After a quick glance at some more running times it is natural to say that quicksort should not be as efficient as others. Also, consider that students learn in basic programming courses that recursion is not really good in general because it could use too much memory, etc. Therefore (and even though this is not a real argument), this gives the idea that quicksort might not be really good because it is a recursive algorithm. Why, then, does quicksort outperform other sorting algorithms in practice? Does it have to do with the structure of real-world data? Does it have to do with the way memory works in computers? I know that some memories are way faster than others, but I don't know if that's the real reason for this counter-intuitive performance (when compared to theoretical estimates).
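    To make the discussion concrete, here is a minimal in-place quicksort sketch (in C#; illustrative only, not from the original question). The point to notice is that partitioning swaps elements within one contiguous array in a single left-to-right pass - an access pattern that caches and prefetchers handle very well, whereas mergesort's auxiliary buffer and heapsort's long jumps across the array do not get the same benefit:

        using System;

        static class QuickSortDemo
        {
            // Sorts a[lo..hi] in place; no auxiliary array is allocated.
            static void QuickSort(int[] a, int lo, int hi)
            {
                if (lo >= hi) return;
                int p = Partition(a, lo, hi);
                QuickSort(a, lo, p - 1);
                QuickSort(a, p + 1, hi);
            }

            // Lomuto partition: one linear, cache-friendly scan of the range.
            static int Partition(int[] a, int lo, int hi)
            {
                int pivot = a[hi];
                int i = lo;
                for (int j = lo; j < hi; j++)
                {
                    if (a[j] <= pivot)
                    {
                        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                        i++;
                    }
                }
                int t = a[i]; a[i] = a[hi]; a[hi] = t;
                return i;
            }

            static void Main()
            {
                var data = new[] { 5, 3, 8, 1, 9, 2 };
                QuickSort(data, 0, data.Length - 1);
                Console.WriteLine(string.Join(", ", data)); // 1, 2, 3, 5, 8, 9
            }
        }

    Production implementations additionally pick a random or median-of-three pivot to avoid the O(n²) worst case on already-sorted input, which is part of why the average case dominates in practice.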

    Read the article

  • Announcing Two Papers Addressing the RPAS Fusion Client

    - by Oracle Retail Documentation Team
    Oracle Retail has published two documents to My Oracle Support addressing the Retail Predictive Application Server (RPAS) Fusion Client, a web-based rich client developed using the latest Oracle Application Development Framework (ADF). The Fusion Client provides an enhanced user experience for communicating with the RPAS server.

    Oracle Retail Predictive Application Server Fusion Client Getting Started Guide (Doc ID 1492759.1)
    The Retail Predictive Application Server (RPAS) is a configurable platform that provides capabilities such as a multidimensional database structure, batch and online processing, a configurable user interface, a configurable calculation engine, user security, and utility functions such as importing and exporting, all on a highly scalable technical environment that can be deployed on a variety of hardware. This paper addresses typical questions that arise while setting up and deploying the Fusion Client, provides performance recommendations, and highlights the differences between the Classic Client and the Fusion Client.

    Oracle Retail RPAS Fusion Client Performance Issue Report (Doc ID 1493747.1)
    Performance issues can be frustrating for customers, and Oracle Retail will strive to assist you as you attempt to enhance the performance of your systems. To ensure the timeliest processing of your issue, retailers and partners are encouraged to respond as thoroughly as possible to each question in this document, which can be sent back for analysis by logging a Service Request and following typical Customer Support processes. The sections of the document solicit information about the following:
    - Performance Issue Description
    - Performance Issue Details
    - System Configuration Data
    - Application Configuration Data
    - Performance Log Files

    Read the article

  • Using ConcurrentQueue for thread-safe Performance Bookkeeping.

    - by Strenium
    Just a small tidbit that sprang up today. I had to bookkeep and emit diagnostics for the average thread performance in highly threaded code over a period of the last X number of calls and no more. Need of the day: a thread-safe, self-managing stats container. .NET 4.0 introduced the new thread-safe 'Collections.Concurrent' objects and I've been using them frequently - one in particular seemed like a good fit for storing each thread's performance data - ConcurrentQueue. But I wanted to store only the most recent X# of calls, and since ConcurrentQueue currently does not support a size constraint I had to come up with my own generic version, which attempts to restrict usage to numeric types only: unfortunately there is no IArithmetic-like interface which constrains to only numeric types, so the constraints here aren't as elegant as they could be. (Note the use of the Average() method; of course you can use others as well as make your own.)

    FIFO FixedSizedConcurrentQueue

        using System;
        using System.Collections.Concurrent;
        using System.Linq;

        namespace xxxxx.Data.Infrastructure
        {
            [Serializable]
            public class FixedSizedConcurrentQueue<T> where T : struct, IConvertible, IComparable<T>
            {
                private FixedSizedConcurrentQueue() { }

                public FixedSizedConcurrentQueue(ConcurrentQueue<T> queue)
                {
                    _queue = queue;
                }

                ConcurrentQueue<T> _queue = new ConcurrentQueue<T>();

                public int Size { get { return _queue.Count; } }
                public double Average { get { return _queue.Average(arg => Convert.ToInt32(arg)); } }

                public int Limit { get; set; }

                public void Enqueue(T obj)
                {
                    _queue.Enqueue(obj);
                    lock (this)
                    {
                        T @out;
                        while (_queue.Count > Limit) _queue.TryDequeue(out @out);
                    }
                }
            }
        }

    The usage case is straightforward; in this case I'm using a FIFO queue with a maximum size of 200 to store doubles, to which I simply Enqueue() the calculated rates:

    Usage

        var RateQueue = new FixedSizedConcurrentQueue<double>(new ConcurrentQueue<double>()) { Limit = 200 }; /* greater size == longer history */

    That's about it. Happy coding!
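    To round this out, here is a small hypothetical sketch (not from the original post) of how such a queue might be fed from worker threads: time each call with a Stopwatch, enqueue the elapsed milliseconds, and read the rolling Average for diagnostics. It assumes the FixedSizedConcurrentQueue class above is in scope; DoSomeWork is a placeholder for the real per-call work.

        using System;
        using System.Collections.Concurrent;
        using System.Diagnostics;
        using System.Threading;
        using System.Threading.Tasks;

        class RollingStatsDemo
        {
            static void Main()
            {
                // Same container as above: keep only the most recent 200 samples.
                var timings = new FixedSizedConcurrentQueue<long>(new ConcurrentQueue<long>()) { Limit = 200 };

                Parallel.For(0, 1000, i =>
                {
                    var sw = Stopwatch.StartNew();
                    DoSomeWork(i);                       // placeholder for the real work being measured
                    sw.Stop();
                    timings.Enqueue(sw.ElapsedMilliseconds);
                });

                Console.WriteLine("Average over last " + timings.Size + " calls: " + timings.Average + " ms");
            }

            static void DoSomeWork(int i)
            {
                Thread.Sleep(1);                         // stand-in for actual per-call work
            }
        }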

    Read the article

  • When does a programmer know when a new job is not right?

    - by Mysterion
    I believe that the interview process is a selling of both parties - what can the employee offer the employer and vice versa. Assume an individual has been careful in selecting their new employer (via thorough questioning during the interview process), yet when they arrive at the job they find the employer has not been honest about certain aspects of it. Examples of this dishonesty could include:
    - The employee made it clear that technical excellence is an important factor, which was promised by the employer, but it is not fully delivered or a good technical structure does not exist.
    - The employee stated they want to work on well-architected and short (let's say less than 1 year long) projects, yet when they start they find they are placed on a poorly architected older project.
    - The employee was told of a pair-programming environment to get him up to speed on the project, but was left to his own devices/questions on arrival.
    - The employee was promised a culture that encourages innovation and technical excellence, but finds that this is not the case (e.g. using technology for knowledge retention is laughed at).
    I know that a lot of famous developers feel that you make the place you work at. Is it realistic for a new employee with limited experience in the industry (say less than 5 years) to be able to join the company and change attitudes, or even challenge the employer on the perceived dishonesty? Should they stay in this job or cut their losses?

    Read the article

  • Implementing my Entity System. Questions about some problems I have found.

    - by Notbad
    Hi! During this week I have been deciding on the implementation of my entity system. It is a big topic, so it has been difficult to pick one option out of the whole. This has been my decision:
    1) I don't have an entity class; it is just an id.
    2) I have systems that contain a list of components (the list is homogeneous; I mean, RenderSystem will just have RenderComponents).
    3) Components will be just data.
    4) There will be some kind of "entity prototypes" in a manager or something, from which we will create entity instances. Ideally they will define the type of components an entity has and its initialization data.
    5) Prototype code to create an entity (this is from the top of my head): int id=World::getInstance()->createEntity("entity template");
    6) This will notify all systems that a new entity has been created, and if the entity needs a component that the system handles, the system will add it to the entity.
    OK, these are the ideas. Let's see if someone can help with the problems (a rough illustration of the first one follows below):
    1) The main problem is the templates that are sent to the systems in the creation process to populate the entity with the needed components. What would you use, an OR(ed) int? A list of strings?
    2) How do I do initialization for components when the entity has been created? How do I store this in the template? I have thought about having a virtual function in the template that, after the entity has been created and populated, gets the components and sets the initialization values.
    3) Don't you think this is a lot of work for just entity creation?
    Sorry for the long post; I have tried to expose my ideas and findings so that others could have a starting point, besides exposing my problems. Thanks in advance, Notbad.
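    On question 1, a bit-flag component mask is one common way to express "which components does this template need". A rough sketch of the idea (written in C# purely for illustration - the question's codebase is C++-style, and all names here are invented, not taken from it):

        using System;

        // Hypothetical component mask: each system tests the single bit it cares about.
        [Flags]
        enum ComponentMask
        {
            None     = 0,
            Render   = 1 << 0,
            Physics  = 1 << 1,
            Position = 1 << 2,
        }

        class EntityTemplate
        {
            public string Name;
            public ComponentMask Components;   // which components a new instance should receive
        }

        class RenderSystem
        {
            // Called when the world announces that an entity was created from a template.
            public void OnEntityCreated(int entityId, EntityTemplate template)
            {
                if ((template.Components & ComponentMask.Render) != 0)
                {
                    // add a RenderComponent for entityId to this system's component list
                }
            }
        }

    The tradeoff: a flags int is cheap to test but caps out at 32/64 component types and is awkward to edit in data files, while a list of strings is easier to drive from external template files at the cost of a name lookup per component type.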

    Read the article

  • How to mount a drive in /media/userName/ the way Nautilus does, using udisks

    - by Bsienn
    As of my current installation of Ubuntu 13.10 Unity, when I click on a drive in Nautilus it gets mounted in /media/username/mountedDrive. I read that Nautilus uses udisks to do that. Basically I want to auto-mount my drive using udisks at startup, using this method. The problem is that it mounts the drive in /media/mountedDrive, but I want it the way Nautilus does it, in /media/username/mountedDrive. I want the NTFS Data drive to be auto-mounted at /media/bsienn/.

        bsienn@bsienn-desktop:~$ blkid
        /dev/sda1: LABEL="System Reserved" UUID="8230744030743D6B" TYPE="ntfs"
        /dev/sda2: LABEL="Windows 7" UUID="60100EA5100E81F0" TYPE="ntfs"
        /dev/sda3: LABEL="Data" UUID="882C04092C03F14C" TYPE="ntfs"
        /dev/sda5: UUID="8768800f-59e1-41a2-9092-c0a8cb60dabf" TYPE="swap"
        /dev/sda6: LABEL="Ubuntu Drive" UUID="13ea474a-fb27-4c91-bae7-c45690f88954" TYPE="ext4"
        /dev/sda7: UUID="69c22e73-9f64-4b48-b854-7b121642cd5d" TYPE="ext4"

        bsienn@bsienn-desktop:~$ sudo fdisk -l
        Disk /dev/sda: 160.0 GB, 160000000000 bytes
        255 heads, 63 sectors/track, 19452 cylinders, total 312500000 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x8d528d52
        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *     2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2       206848   117730069    58761611    7  HPFS/NTFS/exFAT
        /dev/sda3    158690072   312494116    76902022+   7  HPFS/NTFS/exFAT
        /dev/sda4    117731326   158689279    20478977    5  Extended
        /dev/sda5    137263104   141260799     1998848   82  Linux swap / Solaris
        /dev/sda6    141262848   158689279     8713216   83  Linux
        /dev/sda7    117731328   137263103     9765888   83  Linux
        Partition table entries are not in disk order

        bsienn@bsienn-desktop:~$ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a
        # device; this may be used with UUID= as a more robust way to name devices
        # that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point>   <type>  <options>       <dump>  <pass>
        # / was on /dev/sda7 during installation
        UUID=69c22e73-9f64-4b48-b854-7b121642cd5d /    ext4    errors=remount-ro 0    1
        # swap was on /dev/sda5 during installation
        UUID=8768800f-59e1-41a2-9092-c0a8cb60dabf none swap    sw                0    0

    Desired effect: Picture link
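    For what it's worth, on a udisks2-based desktop such as 13.10 the command-line equivalent of the Nautilus click is udisksctl, which mounts under /media/$USER/ by default - a sketch, assuming the Data partition really is /dev/sda3 on your machine:

        # runs as the logged-in user, no sudo needed; should end up at /media/bsienn/Data
        udisksctl mount -b /dev/sda3

    Running that line from a per-user startup entry (e.g. Startup Applications) is one way to reproduce the /media/<username>/ path that an fstab entry would not give you.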

    Read the article

  • Good design for class with similar constructors

    - by RustyTheBoyRobot
    I was reading this question and thought that good points were made, but most of the solutions involved renaming one of the methods. I am refactoring some poorly written code and I've run into this situation:

        public class Entity {
            public Entity(String uniqueIdentifier, boolean isSerialNumber) {
                if (isSerialNumber) {
                    this.serialNumber = uniqueIdentifier;
                    // Lookup other data
                } else {
                    this.primaryKey = uniqueIdentifier;
                    // Lookup other data with different query
                }
            }
        }

    The obvious design flaw is that someone needed two different ways to create the object, but couldn't overload the constructor since both identifiers were of the same type (String). Thus they added a flag to differentiate. So, my question is this: when this situation arises, what are good designs for differentiating between these two ways of instantiating an object?

    My First Thoughts

    You could create two different static methods to create your object. The method names could be different. This is weak because static methods don't get inherited. You could create different objects to force the types to be different (i.e., make a PrimaryKey class and a SerialNumber class). I like this because it seems to be a better design, but it also is a pain to refactor if serialNumber is a String everywhere else.
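    For the first idea, named static factory methods make the intent explicit at the call site while keeping a single private constructor. A quick sketch (written in C#, though the snippet above is Java-style; member names are illustrative only):

        public class Entity
        {
            private string serialNumber;
            private string primaryKey;

            private Entity() { }   // force callers through the named factories

            public static Entity FromSerialNumber(string serialNumber)
            {
                var e = new Entity();
                e.serialNumber = serialNumber;
                // lookup other data by serial number
                return e;
            }

            public static Entity FromPrimaryKey(string primaryKey)
            {
                var e = new Entity();
                e.primaryKey = primaryKey;
                // lookup other data with the primary-key query
                return e;
            }
        }

    The second idea trades that simplicity for type safety: small PrimaryKey and SerialNumber wrapper types let the compiler, rather than a boolean flag, pick the right constructor overload.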

    Read the article

  • Structuring cascading properties - parent only or parent + entire child graph?

    - by SB2055
    I have a Folder entity that can be Moderated by users. Folders can contain other folders. So I may have a structure like this:

        Folder 1
        Folder 2
        Folder 3
        Folder 4

    I have to decide how to implement Moderation for this entity. I've come up with two options:

    Option 1
    When the user is given moderation privileges to Folder 1, define a moderator relationship between Folder 1 and User 1. No other relationships are added to the db. To determine if the user can moderate Folder 3, I check and see if User 1 is the moderator of any parent folders. This seems to alleviate some of the complexity of handling updates / moved entities / additions under Folder 1 after the relationship has been defined, and reverting the relationship means I only have to deal with one entity.

    Option 2
    When the user is given moderation privileges to Folder 1, define a new relationship between User 1 and Folder 1, and all child entities down to the grandest of grandchildren when the relationship is created, and if it's ever removed, iterate back down the graph to remove the relationship. If I add something under Folder 2 after this relationship has been made, I just copy all Moderators into the new Entity. But when I need to show only the top-level Folders that a user is Moderating, I need to query all folders that have a parent folder that the user does not moderate, as opposed to option 1, where I just query any items that the user is moderating.

    I think it comes down to determining if users will be querying for all parent items more than they'll be querying child items... if so, then option 1 seems better. But I'm not sure. Is either approach better than the other? Why? Or is there another approach that's better than both? I'm using Entity Framework in case it matters.
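    For option 1, the permission check is just a walk up the ancestor chain. A rough sketch of the idea (entity and property names are invented for illustration, not taken from the poster's model):

        using System.Collections.Generic;

        class User { public int Id; }

        class Folder
        {
            public Folder ParentFolder;                        // null for a root folder
            public List<User> Moderators = new List<User>();   // direct moderator links only (option 1)
        }

        static class ModerationCheck
        {
            // Option 1 stores the relationship only on the folder where it was granted,
            // so a match on the folder itself or any ancestor is enough.
            public static bool CanModerate(User user, Folder folder)
            {
                for (var current = folder; current != null; current = current.ParentFolder)
                {
                    if (current.Moderators.Contains(user))
                        return true;
                }
                return false;
            }
        }

    With Entity Framework and lazy loading, each ParentFolder hop can be a separate query, so deep checks cost one round trip per level, while "top-level folders I moderate" remains a single simple query - which is the tradeoff the question describes.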

    Read the article

  • Javascript is not loading

    - by Oden
    Hey, I've got a problem with JavaScript under Ubuntu that drives me crazy. I'm using Gedit for my web sites since I'm an Ubuntu user. When I start a new website I create the folder structure (usually with the GNOME terminal) and copy the files I need into it. The next step is creating an index.html where I build the design and basic JavaScript functionality. JavaScript is stored in a sub-folder of the project, and when I try to load one of the scripts using a script tag in the header, my whole page body disappears. If the source contains a script tag with its own body, and it's not the first one, its code won't run. I've tried to solve the problem by setting chmod to 777 with sudo chmod -R 777 . but nothing changed. CSS is loading correctly, but JS isn't. I'm using the newest version of Apache, no mod_rewrite stuff, but I get the same problem when I run the HTML from file (file:///...). Does anyone know how to solve this problem?

    Read the article

  • Deploying my first website!

    - by test
    I have built a data-driven website - an ASP.NET website using the Entity Framework. In my solution I have 4 projects - the web application (PresentationLayer) and 3 class libraries - the Data, Business and Common layers. In one of these libraries, the Common layer, I have my model (MyModel.edmx). I have always tested my application on Cassini, the ASP.NET Development Server; I have never touched IIS in my life. I bought a domain and hosting on GoDaddy. My logic tells me to grab my four folders (1 for each layer) and simply move them to the root folder. But I know I'm wrong, since then the home page would be mywebsite.org/presentationlayer/default.aspx, and second of all I start getting a bunch of errors where files do not load or are not found. I also know that I need to manage the web.config, but I don't have any experience with where to start. I'm not sure if this is a problem, but I also have a web service included in my presentation layer.

    Read the article

  • What is the recommended MongoDB schema for this quiz-engine scenario?

    - by hughesdan
    I'm working on a quiz engine for learning a foreign language. The engine shows users four images simultaneously and then plays an audio file. The user has to match the audio to the correct image. Below is my MongoDB document structure. Each document consists of an image file reference and an array of references to audio files that match that image. To generate a quiz instance I select four documents at random, show the images and then play one audio file from the four documents at random. The next step in my application development is to decide on the best document schema for storing user guesses. There are several requirements to consider:
    - I need to be able to report statistics at a user level. For example, total correct answers, total guesses, mean accuracy, etc.
    - I need to be able to query images based on the user's learning progress. For example, select 4 documents where guess count is 10 and accuracy is <= 0.50.
    - The schema needs to be optimized for fast quiz generation.
    - The schema must not cause future scaling issues vis-a-vis document size. Assume 1mm users who make an average of 1000 guesses.
    Given all of this as background information, what would be the recommended schema? For example, would you store each guess in the Image document, or perhaps in a User document (not shown), or in a new document collection created for logging guesses? Would you recommend logging the raw guess data, or would you pre-compute statistics by incrementing counters within the relevant document?

    Schema for Image Collection:

        _id            "505bcc7a45c978be24000005"
        date           2012-09-21 02:10:02 UTC
        imageFileName  "BD3E134A-C7B3-4405-9004-ED573DF477FE-29879-0000395CF1091601"
        random         0.26997075392864645
        user           "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
        audioFiles [
          0 {
            audioFileName "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2"
            user          "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
            audioLanguage "English"
            date          2012-09-22 01:15:04 UTC
          }
          1 {
            audioFileName "C3669719-9F0A-4EB5-A791-2C00486665ED-30305-000039A3FDA7DCD2"
            user          "2A8761E4-C13A-470E-A759-91432D61B6AF-25982-0000352D853511AF"
            audioLanguage "Spanish"
            date          2012-09-22 01:17:04 UTC
          }
        ]

    Read the article

  • Complex string matching with fuzzywuzzy

    - by That1Guy
    I'm attempting to write a process that matches obscure strings to a single 'master string' for further processing. I have a lot of data that looks something like this:

        Basketball
        Basket Ball
        Football
        BasketBallR
        BBall
        BBall - r
        FootB

    ...and so on. These need to be mapped to a master record like so:

        Basketball = Basket Ball, BBall
        Basketball - R = BasketBallR, BBall - r

    I also have instances of data resembling this format:

        Football -r
        FootBall - r-g/H,Q,HH

    These situations need to be separated into different categories before being mapped. For example, FootBall - r-g/H,Q,HH should be:

        Football - r
        Football - g
        Football - H
        Football - Q
        Football - HH

    At this point, it still needs to be mapped to a master record... I've tried several different combinations of fuzzywuzzy matching methods, Levenshtein distance measurements, regex, etc. and can't seem to find a reliable method to logically associate different naming styles of a single item with a master name. I'm throwing my hands up in desperation. Are there any existing Python resources that can help sort out my problem? Are there other options? Can anybody point out an obvious option that I might have overlooked? Basically, any suggestion, solution, resource or alternative method is greatly appreciated.

    Read the article

  • Dynamic quadtrees

    - by paul424
    Recently I ended up writing a quadtree for creature culling in the OpenDungeons game. The thing is, those are moving points, and the bounding hierarchy will quickly get lost if the quadtree is not rebuilt very often. I have several variants. The first is to update the leaf position every time a creature move is requested (note that I need collision detection anyway, so this might be necessary regardless). The second would be making leaves large enough that the creature would be sure to stay inside its bounding box (due to its speed limit). The partition of a plane in a quadtree is always fixed (modulo the hierarchical unions of some parts). For creatures close to the center of the plane, there would be no way of keeping them except inside one big leaf; besides, this breaks the invariant that each point can be put into an arbitrarily small area. So, on second thought, could I use several quadtrees, each with its "coordinate axis XY" shifted somewhere else? Before I start playing with this, maybe some other space-dividing structure would suit me better; unfortunately the wiki does not compare their execution times: http://en.wikipedia.org/wiki/Grid_%28spatial_index%29#See_also
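    Since the linked page is about grid indexes: for points that move every frame, a flat uniform grid is often the simpler alternative to rebuilding a quadtree, because updating an entity is just "remove from old cell, add to new cell". A small illustrative sketch (in C#, with invented names - not OpenDungeons code):

        using System.Collections.Generic;

        // Uniform grid over the map: each cell holds the ids of creatures currently inside it.
        class UniformGrid
        {
            readonly float cellSize;
            readonly Dictionary<(int, int), HashSet<int>> cells = new Dictionary<(int, int), HashSet<int>>();

            public UniformGrid(float cellSize) { this.cellSize = cellSize; }

            // Truncating division; use Math.Floor if coordinates can be negative.
            (int, int) CellOf(float x, float y) => ((int)(x / cellSize), (int)(y / cellSize));

            public void Add(int id, float x, float y)
            {
                var cell = CellOf(x, y);
                if (!cells.TryGetValue(cell, out var set)) cells[cell] = set = new HashSet<int>();
                set.Add(id);
            }

            // O(1) update when a creature moves; no tree rebalancing or rebuilding needed.
            public void Move(int id, float oldX, float oldY, float newX, float newY)
            {
                var from = CellOf(oldX, oldY);
                var to = CellOf(newX, newY);
                if (from.Equals(to)) return;
                if (cells.TryGetValue(from, out var oldSet)) oldSet.Remove(id);
                if (!cells.TryGetValue(to, out var newSet)) cells[to] = newSet = new HashSet<int>();
                newSet.Add(id);
            }
        }

    The usual tradeoff is that a grid needs a cell size tuned to creature density and query radius, whereas a quadtree adapts its depth automatically but pays for it when everything keeps moving.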

    Read the article

  • Dim (NEARLY blank) laptop screen, secondary screen works - why?

    - by LIttle Ancient Forest Kami
    My laptop screen is (almost) black while my secondary screen is fine. I believe it to be backlight / brightness related.

    Problem description: It starts when I start the laptop. The system loads and works fine; it's just the screen that has problems. I can see the screen, though very faintly / dimly - it's hard to see anything that isn't very white. E.g. the starting screen has the big ThinkPad logo in white, large font - I can see it, though very dimly. The second screen works very well.

    Official backlight debugging: using the acpi setting prescribed there for ThinkPads didn't help. I can see an entry in /sys/class/backlight/ and it changes when I press the hotkeys for brightness (current backlight power, for instance, goes up or down). acpi-off didn't help, neither did acpi_backlight=vendor.

    Hardware data: The laptop is a ThinkPad Edge with a glossy screen. 4 processors, 2 cores; exemplary CPU data from cat /proc/cpuinfo reports Genuine Intel i5 (M 480 @ 2.67GHz). The OS is Ubuntu Lucid, 10.04 LTS, 64-bit, with the generic Linux kernel (2.6.32-44) and GNOME 2.32.2 (though I doubt there lies the problem).

        $ lspci | grep VGA
        01:00.0 VGA compatible controller: ATI Technologies Inc M92 [Mobility Radeon HD 4500 Series]

        $ lshw -C display
        *-display
            description: VGA compatible controller
            product: M92 [Mobility Radeon HD 4500 Series]
            vendor: ATI Technologies Inc
            physical id: 0
            bus info: pci@0000:01:00.0
            version: 00
            width: 32 bits
            clock: 33MHz
            capabilities: pm pciexpress msi bus_master cap_list rom
            configuration: driver=radeon latency=0
            resources: irq:33 memory:c0000000-dfffffff(prefetchable) ioport:2000(size=256) memory:f0300000-f030ffff memory:f0320000-f033ffff(prefetchable)

    Driver: I was NOT running any proprietary drivers, I just checked with "Hardware drivers". There is one for ATI that is suggested there, though I didn't need it so far. UPDATE: changing the driver to the proprietary one (ATI/AMD FGLRX) didn't help.

    Tried and failed: Resetting / running on power or battery / charging / getting rid of static electricity / warming up *doesn't help*. This is NOT a blank-screen problem, at least it isn't following the official Ubuntu black-screen diagnostics - I can see my screen, though barely.

    What I will try next: check the last updates I've made; IIRC I am running on nomodeset already, but will verify this.

    Any ideas how to proceed best? What is the most probable cause?

    Read the article

  • Logging Output of Azure Startup Tasks to the Event Log

    - by Your DisplayName here!
    This can come in handy when troubleshooting:

        using System;
        using System.Diagnostics;
        using System.Text;

        namespace Thinktecture.Azure
        {
            class Program
            {
                static EventLog _eventLog = new EventLog("Application", ".", "StartupTaskShell");
                static StringBuilder _out = new StringBuilder(64);
                static StringBuilder _err = new StringBuilder(64);

                static int Main(string[] args)
                {
                    if (args.Length != 1)
                    {
                        Console.WriteLine("Invalid arguments: " + String.Join(", ", args));
                        _eventLog.WriteEntry("Invalid arguments: " + String.Join(", ", args));
                        return -1;
                    }

                    var task = args[0];

                    ProcessStartInfo info = new ProcessStartInfo()
                    {
                        FileName = task,
                        WorkingDirectory = Environment.CurrentDirectory,
                        UseShellExecute = false,
                        ErrorDialog = false,
                        CreateNoWindow = true,
                        RedirectStandardOutput = true,
                        RedirectStandardError = true
                    };

                    var process = new Process();
                    process.StartInfo = info;

                    process.OutputDataReceived += (s, e) =>
                    {
                        if (e.Data != null)
                        {
                            _out.AppendLine(e.Data);
                        }
                    };
                    process.ErrorDataReceived += (s, e) =>
                    {
                        if (e.Data != null)
                        {
                            _err.AppendLine(e.Data);
                        }
                    };

                    process.Start();
                    process.BeginOutputReadLine();
                    process.BeginErrorReadLine();
                    process.WaitForExit();

                    var outString = _out.ToString();
                    var errString = _err.ToString();

                    if (!string.IsNullOrWhiteSpace(outString))
                    {
                        outString = String.Format("Standard Out for {0}\n\n{1}", task, outString);
                        _eventLog.WriteEntry(outString, EventLogEntryType.Information);
                    }

                    if (!string.IsNullOrWhiteSpace(errString))
                    {
                        errString = String.Format("Standard Err for {0}\n\n{1}", task, errString);
                        _eventLog.WriteEntry(errString, EventLogEntryType.Error);
                    }

                    return 0;
                }
            }
        }

    You then wrap your startup tasks with the StartupTaskShell and you’ll be able to see stdout and stderr in the application event log.
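    For context, "wrapping" a startup task typically means pointing the role's ServiceDefinition.csdef Task entry at the shell instead of at the task itself - a rough sketch, where the .cmd name and executionContext are placeholders for whatever your role actually ships:

        <!-- ServiceDefinition.csdef (sketch; adjust paths and executionContext to your service) -->
        <Startup>
          <Task commandLine="StartupTaskShell.exe MyRealStartupTask.cmd"
                executionContext="elevated"
                taskType="simple" />
        </Startup>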

    Read the article

  • What is the recommended way to empty a SSD?

    - by Lekensteyn
    I've just received my new SSD, since the old one died. This Intel 320 SSD supports TRIM. For testing purposes, my dealer put malware, err, Windows on it. I want to get rid of it and install Kubuntu on it. It does not have to be a "secure wipe"; I just need to empty the disk in the most healthy way. I believe that dd if=/dev/zero of=/dev/sda just fills the blocks with zeroes and thereby costs another write (correct me if I'm wrong). I've seen the answer How to enable TRIM, but it looks like it's suited to clearing empty blocks, not wiping the disk. hdparm seems to be the program to do it, but I'm not sure if it clears the disk OR cleans empty blocks. From its manual page:

        --trim-sector-ranges
            For Solid State Drives (SSDs). EXCEPTIONALLY DANGEROUS. DO NOT USE THIS OPTION!!
            Tells the drive firmware to discard unneeded data sectors, destroying any data that
            may have been present within them. This makes those sectors available for immediate
            use by the firmware's garbage collection mechanism, to improve scheduling for
            wear-leveling of the flash media. This option expects one or more sector range pairs
            immediately after the option: an LBA starting address, a colon, and a sector count,
            with no intervening spaces.
            EXCEPTIONALLY DANGEROUS. DO NOT USE THIS OPTION!!
            E.g. hdparm --trim-sector-ranges 1000:4 7894:16 /dev/sdz

    How can I make all blocks appear as empty using TRIM?
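    For completeness, more recent util-linux releases ship blkdiscard, which issues a TRIM/discard request for an entire block device in one go - a sketch, assuming the target really is the SSD and not a disk you still need (double-check the device name first):

        # DANGEROUS: discards every block on the device. Verify /dev/sdX is actually the SSD.
        sudo blkdiscard /dev/sdX

    Unlike dd with zeroes, this tells the drive the blocks are unused without writing to the flash at all.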

    Read the article

  • Did C++11 address concerns passing std lib objects between dynamic/shared library boundaries? (ie dlls and so)?

    - by Doug T.
    One of my major complaints about C++ is how hard it is in practice to pass std library objects across dynamic library (i.e. dll/so) boundaries. The std library is often header-only, which is great for doing some awesome optimizations. However, DLLs are often built with different compiler settings that may impact the internal structure/code of the std library containers. For example, in MSVC one DLL may be built with iterator debugging on while another is built with it off. These two DLLs may run into issues passing std containers around. If I expose std::string in my interface, I can't guarantee the code the client is using for std::string is an exact match of my library's std::string. This leads to hard-to-debug problems, headaches, etc. You either rigidly control the compiler settings in your organization to prevent these issues, or you use a simpler C interface that won't have these problems. Or you specify to your clients the expected compiler settings they should use (which sucks if another library specifies other compiler settings). My question is whether or not C++11 tried to do anything to solve these issues?

    Read the article

  • Working with the ADF DVT Map Component

    - by Shay Shmeltzer
    The map component provided by the ADF Faces DVT set of components is one that we always end up using in key demos - simply because it is so nice looking, but also because it is quite simple to use. So in case you need to show some geographical data, or if you just want to impress your manager, here is a little video that shows you how to create two types of maps. The first one is a color-themed map - where you show different states with different colors based on the value of some data point there. The other is a point theme - basically showing specific locations on the map. For both cases I'm using the Oracle-provided MapViewer instance at http://elocation.oracle.com/mapviewer. You can find more information about using the map component in the Web User Interface Developer's Guide here and in the tag doc and components demo. For the first map, the query I'm using (on the HR demo schema in the Oracle DB) is:

        SELECT COUNT(EMPLOYEES.EMPLOYEE_ID),
               Department_name,
               STATE_PROVINCE
        FROM   EMPLOYEES,
               DEPARTMENTS,
               LOCATIONS
        WHERE  employees.department_id = departments.department_id
        AND    departments.location_id = locations.location_id
        GROUP BY Department_name, LOCATIONS.STATE_PROVINCE

    Read the article

  • vim + Ruby on Rails: how do you bounce among those 4-5 files you're currently working on?

    - by glitch
    I'm just starting to get familiar with vim, and I'd like to use it as my primary Rails development tool. As a Visual Studio and RubyMine user, I find a lot of stuff to be missing from the barebones vim installation, and therefore I went ahead and attempted to soup it up with plugins such as rails.vim, tcomment, ruby-vim, NERDtree, and a couple of others. The issue is that I still don't quite get the average workflow of using vim as one's Rails IDE. In RubyMine (again, similarly to Visual Studio) I have a series of tabs always open, containing the main files I'm switching among, and I additionally use NERDtree to open files from the folder structure. I tried opening them as new tabs, but the tab system in vim is just a lot more awkward than that in real IDEs. (I haven't seen vim pros in action, but I imagine that they'd not be relying on tabs, but using numerous splits instead, keeping at least a couple of files per split and switching between them with CTRL + ^. Is that the case?) So, at the end of the day, how do I really squeeze the most from vim if I want to be able to quickly access several files at once? Thank you!

    Read the article

  • Anemic Domain Model, Business Logic and DataMapper (PHP)

    - by sunwukung
    I've implemented a rudimentary ORM layer based on DataMapper (I don't want to use a full-blown ORM like Propel/Doctrine - for anything beyond simple fetch/save ops I prefer to access the data layer directly using a SQL abstraction layer). Following the DataMapper pattern, I've endeavoured to keep all persistence operations in the Mapper - including the location of related entities. My Entities have access to their Mapper, although I try not to call Mapper logic from the Entity interface (although this would be simple enough). The result is:

        // get a mapper and produce an entity
        $ProductMapper = $di->get('product_mapper');
        $Product = $ProductMapper->find('[email protected]','email');

        //.. mutate some values.. save
        $ProductMapper->save($Product)

        // uses __get to trigger relation acquisition
        $Manufacturer = $Product->manufacturer;

    I've read some articles regarding the concept of an Anemic Domain Model, i.e. a Model that does not contain any "business logic". When demonstrating the sort of business logic ideally suited to a Domain Model, however, acquiring related data items is a common example. Therefore I wanted to ask this question: is persistence logic appropriate in Domain Model objects?

    Read the article
