Search Results

Search found 20890 results on 836 pages for 'self reference'.


  • How to structure my GUI agnostic project?

    - by Nezreli
    I have a project which loads from a database an XML file that defines a form for some user. The XML is transformed into a collection of objects whose classes derive from a single parent, something like:

        Control
            EditControl
                TextBox
            ContainerControl
                Panel

    Those classes are responsible for creating the GUI controls for three different environments: WinForms, DevExpress XtraReports and WebForms. All three frameworks share mostly the same control tree and have a common single parent (Windows.Forms.Control, XrControl and WebControl). So, how to do it?

    Solution a) The Control class has abstract methods:

        Control CreateWinControl();
        XrControl CreateXtraControl();
        WebControl CreateWebControl();

    This could work, but the project has to reference all three frameworks, and the classes are going to be fat with methods supporting all three implementations.

    Solution b) Each framework implementation is done in a separate project and has the exact class tree of the Core project. All three implementations are connected to the Core classes through an interface. This seems clean, but I'm having a hard time wrapping my head around it.

    Does anyone have a simpler solution or a suggestion for how I should approach this task?
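    For what it's worth, here is a minimal C# sketch of how solution b) often looks: the Core project keeps the framework-agnostic tree and exposes only a small factory interface, and each framework project implements that interface in its own assembly. All names below (IControlFactory, HtmlControlFactory) are hypothetical, and the HTML renderer merely stands in for a WinForms/XtraReports/WebForms mapper so the sketch stays self-contained.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Core project: no reference to any UI framework.
        public abstract class Control
        {
            public string Caption { get; set; } = "";
            public List<Control> Children { get; } = new List<Control>();
        }
        public class TextBox : Control { }
        public class Panel : Control { }

        // The only contract the Core project exposes to the framework-specific projects.
        public interface IControlFactory<TNative>
        {
            TNative Create(Control control);
        }

        // Lives in one "framework" project; here it renders HTML strings for illustration.
        public class HtmlControlFactory : IControlFactory<string>
        {
            public string Create(Control control) => control switch
            {
                TextBox t => $"<input value=\"{t.Caption}\"/>",
                Panel p   => $"<div>{string.Concat(p.Children.Select(Create))}</div>",
                _         => throw new NotSupportedException(control.GetType().Name)
            };
        }

        public static class Demo
        {
            public static void Main()
            {
                var form = new Panel { Children = { new TextBox { Caption = "Name" } } };
                Console.WriteLine(new HtmlControlFactory().Create(form));
                // prints: <div><input value="Name"/></div>
            }
        }

    Each real factory (WinForms, XtraReports, WebForms) would live in its own project and reference only its own framework plus the Core assembly, which keeps the Core classes thin.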

    Read the article

  • create a .deb Package from scripts or binaries

    - by tdeutsch
    I searched for a simple way to create .deb packages for things which have no source code to compile (configs, shell scripts, proprietary software). This was quite a problem, because most of the packaging tutorials assume you have a source tarball you want to compile. Then I found this short tutorial (German). Afterwards, I created a small script to build a simple repository, like this:

        rm /export/my-repository/repository/*
        cd /home/tdeutsch/deb-pkg
        for i in $(ls | grep my); do dpkg -b ./$i /export/my-repository/repository/$i.deb; done
        cd /export/avanon-repository/repository
        gpg --armor --export "My Package Signing Key" > PublicKey
        apt-ftparchive packages ./ | gzip > Packages.gz
        apt-ftparchive packages ./ > Packages
        apt-ftparchive release ./ > /tmp/Release.tmp; mv /tmp/Release.tmp Release
        gpg --output Release.gpg -ba Release

    I added the key to the apt keyring and included the source like this:

        deb http://my.default.com/my-repository/ ./

    It looks like the repo itself is working well (I ran into some problems; to fix them I needed to add the Packages file twice and use the temp-file workaround for the Release file). I also put some downloaded .deb files into the repo, and they seem to work without problems. But my self-created packages don't... When I do sudo apt-get update, they cause errors like this:

        E: Problem parsing dependency Depends
        E: Error occurred while processing my-printerconf (NewVersion2)
        E: Problem with MergeList /var/lib/apt/lists/my.default.com_my-repository_._Packages
        E: The package lists or status file could not be parsed or opened.

    Does anyone have an idea what I did wrong?

    Read the article

  • Is there such a thing as "closure" with software work?

    - by Bobby Tables
    I burned out last year (after a decade of full-time programming jobs) and am on a sabbatical now. With all the self-examination I've started to figure out some of the root causes of my burnout, and one of the major ones is basically this: there was never any real closure in any of the work I've ever done. It was always a case of getting into an open-ended support/maintenance grind and going stale. When I first entered the industry, I had this image of programming work being very project-based. And I expected projects to have a start, a middle, and an END. And then you move on and start on something totally new and fresh. Basically, I never expected that a lot (most) of software work involves supporting and maintaining the same code base for open-ended, long periods of time - years and even decades. That, combined with generally having itchy feet, makes me think that burnout is inevitable for me, after 2-3 years, in ANY full-time software job. All this sounds like I probably should have been a contractor instead of a full-timer. But when I discuss this with people, a lot of them say that even THEN you can't really escape having to go back and maintain/support the stuff you worked on, over and over (e.g. coming back on support contracts). The nature of software work is simply like that. There is no project closure, unlike in many other engineering fields. So my question is: is there ANY programming work out there which is based on short to mid-term projects/stints and then moving on cleanly? And is there any particular industry domain or specialization where this kind of project work is typical?

    Read the article

  • Logarithmic spacing of FFT subbands

    - by Mykel Stone
    I'm trying to work through the examples in the GameDev.net Beat Detection article ( http://archive.gamedev.net/archive/reference/programming/features/beatdetection/index.html ). I have no issue with performing an FFT and getting the frequency data, and I can follow most of the article. I'm running into trouble, though, in section 2.B, "Enhancements and beat decision factors". In this section the author gives three equations, numbered R10-R12, used to determine how many bins go into each subband:

        R10 - Linear increase of the width of the subband with its index
        R11 - We can choose, for example, the width of the first subband
        R12 - The sum of all the widths must not exceed 1024

    He says the following in the article: "Once you have equations (R11) and (R12) it is fairly easy to extract 'a' and 'b', and thus to find the law of the 'wi'. This calculus of 'a' and 'b' must be made manually and 'a' and 'b' defined as constants in the source; indeed they do not vary during the song." However, I cannot seem to understand how these values are calculated... I'm probably missing something simple, but learning Fourier analysis in a couple of weeks has left me Decimated-in-Mind and I cannot seem to see it.
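    For readers stuck on the same step, here is one way to extract the constants, assuming the linear law of (R10) is w_i = a*i + b for subbands i = 1..N (N and w_1 are whatever you choose; the values below are only an example):

        From (R11):  w_1 = a + b,  so  b = w_1 - a
        From (R12):  sum_{i=1..N} (a*i + b) = a*N*(N+1)/2 + b*N = 1024

        Substituting b = w_1 - a gives  a*N*(N-1)/2 = 1024 - N*w_1, hence

        a = 2*(1024 - N*w_1) / (N*(N-1))     and     b = w_1 - a

    For example, with N = 32 subbands and a first-subband width of w_1 = 2, this gives a ≈ 1.94 and b ≈ 0.06, and the 32 widths then sum to 1024 as required.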

    Read the article

  • Should I cache the data or hit the database?

    - by JD01
    I have not worked with any caching mechanisms and was wondering what my options are in the .NET world for the following scenario. We basically have a REST service where the user passes the ID of a category (think folder); this category may have lots of sub-categories, and each of the sub-categories could have 1000s of media containers (think file reference objects) which contain information about a file that may be on a NAS or SAN server (the files are videos in this case). The relationship between these categories is stored in a database, together with some permission rules and metadata about the sub-categories. So from a UI perspective we have a lazy-loaded tree control which is driven by the user clicking on each sub-folder (think of Windows Explorer). Once they come to the URL of the video file, they can then watch the video. The number of users could grow into the 1000s, and the sub-categories and videos could be in the 10000s as the system grows. The question is: should we carry on the way it is currently working, where each request hits the database, or should we think about caching the data? We are using IIS 6/7 and ASP.NET.
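    As a point of reference, the cache-aside pattern this kind of question usually leads to looks roughly like the sketch below. It uses System.Runtime.Caching.MemoryCache (available from .NET 4.0; on earlier ASP.NET versions HttpRuntime.Cache offers the same idea), and LoadCategoryFromDb plus the Category class are hypothetical stand-ins for the real data-access code.

        using System;
        using System.Runtime.Caching;

        public class Category { /* sub-categories, media containers, permissions ... */ }

        public class CategoryService
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;

            public Category GetCategory(int id)
            {
                string key = "category:" + id;
                var cached = Cache.Get(key) as Category;
                if (cached != null)
                    return cached;                       // cache hit: no database round trip

                var category = LoadCategoryFromDb(id);   // cache miss: hit the database once
                Cache.Set(key, category, new CacheItemPolicy
                {
                    // Category/permission data that changes rarely can tolerate a short staleness window.
                    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
                });
                return category;
            }

            // Hypothetical placeholder for the real database call.
            private Category LoadCategoryFromDb(int id) { return new Category(); }
        }

    Whether this is worth doing depends on how often the category tree changes and how expensive the per-click queries actually are; measuring the database load first is usually the deciding factor.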

    Read the article

  • What is the right way to partition an OEM laptop for a Windows 7/Ubuntu 10.10 dual or triple boot?

    - by Denja
    Hi Linux community, I find myself struggling with the ever slow and buggy windoze OS once again. It's time to change to Ubuntu 10.10 64-bit as a really faster operating system. My laptop's hard disk has a RECOVERY and an HP_TOOLS partition; they are both primary. I have the system recovery DVD for Windows 64-bit should anything happen.

    Here's the layout I used with Windows before:

        * (C:) Windows 7 system partition, NTFS - 284.89 GB (Primary, Boot, Pagefile, Dump)
        * HP_TOOLS system partition, FAT32 - 99 MB (Primary)
        * (D:) RECOVERY partition, NTFS - 12.90 GB (Primary)
        * SYSTEM partition, NTFS - 199 MB (Primary)

    Here's the layout I want to make, based on your answers:

        * (C:) Windows 7 system partition, NTFS - 60 GB (Primary) (sda1)
        * (D:) Windows DATA partition (user files), NTFS - 120 GB (Primary) (sda2); I want to share this with Linux
        * Linux root, ext4 - 100 GB (Primary) (sda3) (Ubuntu 10.10 64-bit)
        * Linux swap - RAM size, 3 GB (sda4)
        * Linux root, ext3 - 15.9 GB (Extended) (sda5) (OpenSuse or Puppy)

    Here is my new Ubuntu 10.10 64-bit layout in use now:

        * SYSTEM partition, NTFS - 199 MB (Primary) (sda1) -- "Partition 1 does not end on cylinder boundary." (?)
        * (C:) Windows 7 system partition, NTFS - 90 GB (Primary) (sda2)
        * (D:) Windows 7 RECOVERY partition, NTFS - 12.90 GB (Primary) (sda3)
        * Linux system partition, EXTENDED - 195 GB (Logical)
        * Linux root, ext4 - 10 GB (Extended) (sda5)
        * Linux home, ext3 - 185 GB (Extended) (sda6)

    I didn't know if I could wipe all previous partitions when I installed Ubuntu, because of the RECOVERY partition, so I just made space for my extended partition by deleting the HP_TOOLS (FAT32) partition. By doing this I managed to install Ubuntu 64-bit successfully, but I couldn't actually make the partition for the swap or for a third Linux OS.

    Question 1: What is the right way to partition an OEM laptop for a Windows 7/Ubuntu 10.10 dual or triple boot?

    Thank you in advance for your advice and suggestions, and Happy New Year to all!

    Read the article

  • EMEA Analytics & Data Integration Oracle Partner Forum

    - by Mike.Hallett(at)Oracle-BI&EPM
    MONDAY 12TH NOVEMBER, 2012, IN LONDON (UK)

    For Oracle Partners across Europe, the Middle East and Africa: come to hear the latest news from Oracle OpenWorld about Oracle BI & Data Integration, and propel your business growth as an Oracle partner. This event should appeal to BI or Data Integration specialised partners, executives, sales, pre-sales and solution architects, with a choice of participation in the plenary day and then a set of special-interest (technical) sessions. The follow-on breakout sessions from the 13th November provide deeper dives and technical training for those of you who wish to stay for more detailed and hands-on workshops.

    Keynote: Andrew Sutherland, SVP Oracle Technology

    Hot agenda items will include:

        * The Fusion Middleware Stack: Engineered to work together
        * A complete Analytics and Data Integration Solution Architecture: Big Data and Little Data combined
        * In-Memory Analytics for Extreme Insight
        * Latest Product Development Roadmap for Data Integration and Analytics

    Venue: Oracle's London City (Moorgate) offices. Places are limited; register from this link {see Register button at bottom right of page}.

    Note: Registration for the conference and the deeper dives and technical training is free of charge to OPN member partners, but you will be responsible for your own travel and hotel expenses.

    Event schedule: during this event you can learn about partner success stories, participate in an array of break-out sessions, exchange information with other partners and enjoy a vibrant panel discussion.

        Nov. 12th: Day 1 - Main Plenary Session: full day, starting 10.30 am. Oracle-hosted dinner in the evening.
        Nov. 13th onwards:
            * Architecture Masterclass: IM Reference Architecture - Big Data and Little Data combined (1 day)
            * BI-Apps Bootcamp (4 days)
            * Oracle GoldenGate workshop (1 day)
            * Oracle Data Integrator and Oracle Enterprise Data Quality workshop (1 day)

    For further information and detail, download the Agenda (pdf) or contact Michael Hallett at [email protected].

    Read the article

  • Transitioning from a mechanical engineer to software developer. What path to take?

    - by Patrick
    Hi all, I am 24 years old and have a BS in mechanical engineering from Mizzou. I have been working as a mechanical engineer in a big corporation for about 2 years since graduation. In this time, I have decided that I'm just not that interested in mechanical engineering topics, and I'm much more excited about web technology, website design, etc. I do a very small amount of freelance web design just to get experience, but I have no formal training yet. Ultimately I could see myself being very excited about working for a small startup in web technology, or creating and selling my own programs in more of an entrepreneurial role. What path do you recommend for transitioning into this field? Go back to school and get a BS in CS? MS in CS? Tech school for classes? Self study? I'm feeling overwhelmed by the options available, but the one thing I know for sure is that I'm excited to make a change. Thanks!

    Read the article

  • Estimate of Hits / Visits / Uniques in order to fall within a given Alexa Tier?

    - by Alex C
    I was wondering if anyone could offer up rough estimates of how many hits a day move you into a given Alexa rank:

        Top 5,000
        Top 10,000
        Top 50,000
        Top 100,000
        Top 500,000
        Top 1,000,000

    I know this is incredibly subjective, and thus the broad brush strokes with the number ranges... BUT I've got a site currently ranked just over 1.2M worldwide and over 500k in the USA (http://www.alexa.com/siteinfo/fstr.net). Pretty cool for something hand-built on weekends (pat self on back). I was applying to an ad platform and was told that their program doesn't accept webmasters who have an Alexa rank greater than 100,000. (Time to take back that pat on the back, I guess.) I know that my hits in the last 30 days are somewhere on the order of 15,000 uniques and 20,000 pageviews. So I'm wondering how much harder I have to work to achieve my next "goals"? I'd like to break into the top million, then re-evaluate from there. It'd be nice to know what those targets translate into (very roughly, of course). I imagine that Alexa ranks and tiers become very much exponential as you move up the ranks, but even anecdotal evidence from other webmasters would be really useful to me (i.e. "I have a site that is ranked X and it got Y hits in the last 30 days"). Thanks :) - Alex

    Read the article

  • Comb Over

    - by Tim Dexter
    Being somewhat follicly challenged, and to my wife's utter relief, the comb over is not something I have ever considered. The title is a tenuous reference to a formatting feature that Adobe offers in their PDF documents. The comb provides the ability to space a string of characters equally on a pre-defined form layout so that it fits neatly in the area. See how the numbers above are being spaced correctly. It's not a function of the font but a property of the form field. For the first time in a long time I had the chance to build a PDF template today to help out a colleague. I spotted the property and thought, hey, let's give it a whirl and see if Publisher supports it. Lo and behold, Publisher handles the comb spacing in its PDF outputs. Exciting, eh? OK, maybe not that exciting, but I was very pleasantly surprised to see it working. I am reliably informed by Leslie, BIP Evangelist and Tech Writer, that this feature was introduced from version 10.1.3.4.2 onwards. There's no mention of comb overs in the official docs. Happy combing!

    Read the article

  • Lifecycle of an ASP.NET MVC 5 Application

    Here you can download a PDF document that charts the lifecycle of every ASP.NET MVC 5 application, from receiving the HTTP request to sending the HTTP response back to the client. It is designed both as an educational tool for those who are new to ASP.NET MVC and as a reference for those who need to drill into specific aspects of the application. The PDF document has the following features:

        * Relevant HttpApplication stages to help you understand where MVC integrates into the ASP.NET application lifecycle.
        * A high-level view of the MVC application lifecycle, where you can understand the major stages that every MVC application passes through in the request processing pipeline.
        * A detail view that drills down into the details of the request processing pipeline. You can compare the high-level view and the detail view to see how the lifecycle's details are collected into the various stages.
        * Placement and purpose of all overridable methods on the Controller object in the request processing pipeline. You may or may not need to override any one method, but it is important to understand their role in the application lifecycle so that you can write code at the appropriate lifecycle stage for the effect you intend (a small example is sketched after this list).
        * Blown-up diagrams showing how each of the filter types (authentication, authorization, action, and result) is invoked.
        * A link to a useful article or blog from each point of interest in the detail view.
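    As a quick illustration of what "overridable methods on the Controller object" means in practice (this example is not from the PDF itself), an ASP.NET MVC controller can hook the pipeline like this:

        using System.Diagnostics;
        using System.Web.Mvc;

        public class ProductsController : Controller
        {
            protected override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                // Runs after the authentication/authorization filters, just before the action method.
                Debug.WriteLine("Entering " + filterContext.ActionDescriptor.ActionName);
                base.OnActionExecuting(filterContext);
            }

            protected override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                // Runs after the action result (for example, the view) has been executed.
                Debug.WriteLine("Response rendered");
                base.OnResultExecuted(filterContext);
            }

            public ActionResult Index()
            {
                return View();
            }
        }

    The PDF's detail view shows where each of these overrides sits relative to the filters and the result-execution stage.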

    Read the article

  • Oracle Flashback Technology - Webcast 9th June 2010

    - by Alex Blyth
    Hi All

    Here are the details for the webcast on Oracle Flashback Technologies on Wednesday (9th June 2010), beginning at 1.30pm (Sydney, Australia time). The Oracle Database architecture leverages the unique technological advances in the area of database recovery due to human errors. Oracle Flashback Technology provides a set of new features to view and rewind data back and forth in time. The Flashback features offer the capability to query historical data, perform change analysis, and perform self-service repair to recover from logical corruptions while the database is online. With Oracle Flashback Technology, you can indeed undo the past!

    Oracle9i introduced Flashback Query to provide a simple, powerful and completely non-disruptive mechanism for recovering from human errors. It allows users to view the state of data at a point in time in the past without requiring any structural changes to the database. Oracle Database 10g extended the Flashback Technology to provide fast and easy recovery at the database, table, row, and transaction level. Flashback Technology revolutionizes recovery by operating just on the changed data. The time it takes to recover from the error is now equal to the amount of time it took to make the mistake. Oracle 10g Flashback Technologies include Flashback Database, Flashback Table, Flashback Drop, Flashback Versions Query, and Flashback Transaction Query. Flashback technology can just as easily be utilized for non-repair purposes, such as historical auditing with Flashback Query and undoing test changes with Flashback Database. Oracle Database 11g introduces an innovative method to manage and query long-term historical data with Flashback Data Archive. This release also provides an easy, one-step transaction backout operation with the new Flashback Transaction capability.

    Webcast: http://strtc.oracle.com (IE6, 7 & 8 supported only)
    Conference ID for the webcast: 6690835
    Conference key: flashback
    Enrollment is required. Please click here to enroll. Please use your real name in the name field (it just makes it easier for us to help you out if we can't answer your questions on the call).

    Audio details:
        NZ Toll Free - 0800 888 157, or
        AU Toll Free - 1800 420 354 (or +61 2 8064 0613)
        Meeting ID: 7914841
        Meeting Passcode: 09062010

    Talk to you all Wednesday 9th June
    Alex

    Read the article

  • ASP.NET website deployment [on hold]

    - by Rei Brazilva
    I am getting my hands wet with ASP.NET and I have been following the tutorials. I deployed the site to Azure and it worked great. Today I started actually designing the site, and when I published, it looks as if it doesn't read any of the files I just updated, added, and modified. It works on my localhost, but not on Azure. I thought that when you publish, everything goes up, including the new files. I don't have enough reputation to add a picture, so you'll forgive me. So, basically, how do I get my entire site uploaded?

    In case anyone does stop by, I was able to pull this out just recently:

        CA0058 Error Running Code Analysis
        CA0058 : The referenced assembly 'DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246' could not be found. This assembly is required for analysis and was referenced by: C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\WebsiteManager\bin\WebsiteManager.dll, C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\packages\Microsoft.AspNet.WebPages.OAuth.2.0.20710.0\lib\net40\Microsoft.Web.WebPages.OAuth.dll. [Errors and Warnings] (Global)

        CA0001 Error Running Code Analysis
        CA0001 : The following error was encountered while reading module 'Microsoft.Web.WebPages.OAuth': Assembly reference cannot be resolved: DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246. [Errors and Warnings] (Global)

    Could this have something to do with the problem?

    Read the article

  • In MVC, why can't a model create a view?

    - by MUY Belgium
    I have a web application written in Perl with a controller, some "views" and some "models". Each "model" corresponds to one "view". The controller (one file) creates a Model object corresponding to each view (the view is a CGI argument) and then retrieves the view from the module it has just created. Indeed, this should be a bad thing, but can you argue a bit more about it? My first idea was that since the "Model" object depends upon the "view", the "model" is actually a view. But also, the fact that ALL the CGI parameters are passed to the Model causes the "Model" to become not truly a view but to lose all interest, since it is only related to the current implementation of the web app. In other words, the "Model" keeps the model role but loses its "comprehensiveness" (the "Model" is not easily understandable). I'm quite new to project analysis, so please do not be too harsh. Why is this bad? I have made a prototype with the main structures I have understood of this web application, made as short as possible:

        # Model.pm
        package Model;
        import {
            # this requires an attribute called "view"
            # and this requires an argument which is the cgi params
        }
        ...

        # View1.pm
        package View1;
        ...

        # Model1.pm
        package ModelView1;
        base Model;
        use View1;

        sub new {
            my $class = shift;
            my $arg   = shift;
            Model::DoSomething($arg);
            $self->view = new View1($arg);
            ...
        }

        # controller.cgi
        my $model = 0;
        ...
        $model = new Model1( cgi_param => params() );   # there are several models here
        ...
        print $model->get_view()->get_html();
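    For contrast, the conventional arrangement - sketched here in C# rather than Perl, purely as an illustration - keeps the request parameters and the choice of view in the controller, so the model never references a view and stays reusable:

        using System;
        using System.Collections.Generic;

        // Model: pure data/behaviour, with no knowledge of any view or of CGI parameters.
        public class ProductModel
        {
            public string Name { get; set; }
            public decimal Price { get; set; }
        }

        // View: knows how to render a model it is handed.
        public interface IView<TModel>
        {
            string Render(TModel model);
        }

        public class ProductHtmlView : IView<ProductModel>
        {
            public string Render(ProductModel m) => $"<h1>{m.Name}</h1><p>{m.Price}</p>";
        }

        // Controller: the only place that reads request parameters and wires model to view.
        public class ProductController
        {
            public string Handle(IDictionary<string, string> requestParams)
            {
                var model = new ProductModel { Name = requestParams["name"], Price = 9.99m };
                IView<ProductModel> view = new ProductHtmlView();
                return view.Render(model);
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var controller = new ProductController();
                Console.WriteLine(controller.Handle(new Dictionary<string, string> { ["name"] = "Widget" }));
            }
        }

    When the model constructs its own view (as in the prototype above), the dependency arrow points the wrong way: the model can no longer be tested or reused without dragging the view and the CGI layer along with it.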

    Read the article

  • Best practice for designing a risk-style board game

    - by jyanks
    I'm just trying to figure out how to set up the code for a game like Risk... I would like it to be extensible, so that I can have multiple maps (i.e. World, North America, Eurasia, Africa), so hardcoding the map doesn't seem to make a whole lot of sense. I'm a bit confused about how/where items should be stored and accessed. Here are the objects I see the game theoretically using:

        - Countries/Territories
        - Cities (can be contained within territories)
        - Capitols
        - Connections
        - Continents
        - Map
        - Troops

    At the moment, I feel like:

        - A map should have a list of continents and countries. The continents would be more of a 'logical' thing, where the continents would just be lists of countries that are checked for bonuses at the start of turns.
        - Countries should have a list of the countries that they're connected to, for the connections.

    What I can't figure out is: where do I store the troops? Do I have an object for every single troop, or do I just store the number of troops on a country object as an integer? What about capitols and cities? Do those just have a reference to the country they reside in? Is there anything I'm not seeing here that's going to screw me over in the long run with the way that I'm thinking about things now? Any advice would be appreciated.
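    One possible shape for this - a sketch only, with every class and property name hypothetical - keeps troops as a plain integer on each territory, models connections as adjacency lists, and treats continents as named groups of territory references, so the whole map can be loaded from data rather than hardcoded:

        using System.Collections.Generic;

        public class Territory
        {
            public string Name { get; set; }
            public int Troops { get; set; }          // a count is enough; no per-troop objects
            public int OwnerId { get; set; }
            public bool HasCapitol { get; set; }     // cities/capitols just point back to their territory
            public List<Territory> Connections { get; } = new List<Territory>();
        }

        public class Continent
        {
            public string Name { get; set; }
            public int BonusTroops { get; set; }     // checked at the start of each turn
            public List<Territory> Territories { get; } = new List<Territory>();
        }

        public class Map
        {
            public string Name { get; set; }
            public List<Territory> Territories { get; } = new List<Territory>();
            public List<Continent> Continents { get; } = new List<Continent>();
            // Populating this from an XML/JSON map file is what makes multiple maps
            // (World, North America, Eurasia, Africa) possible without code changes.
        }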

    Read the article

  • Going on 15 months for me...

    - by Ratman21
    About 5 face-to-face interviews, 4 telephone ones and, except for the two-week Census job, nothing else. After 15 months looking for work, I am still without a JOB. What is wrong here, or with me? Let's see: hard worker (check), self-motivated to do well on a job (check), certified CompTIA A+, Security+ and Network+ technician (check), 20+ years of experience in "IT" (CHECK), in good health - in 20 years of work, only 15 days off due to health issues (check), 18 years of experience in technical help desk support (check), can still work better than younger personnel (check), strong troubleshooting skills for software, computer hardware and circuit issues (check) and multiple software languages (hey, I have done some programming) - check. Hmm, I don't see any problem with me (of course I could have missed something; please let me know if you see what I am missing).

    Now as to what I have been up to since I last blogged: the same things of course - job hunting, job hunting and study. I have set up a sim of my home LAN and will be adding a wireless print server to the sim, and in real life, soon. I was able to pull up and copy the examples of Cisco router commands that I had on my old laptop to my newer PC. Every time I used a new command while working the NOC on my last job, I would cut and paste a copy of the command on the router (and what it did) I was working on, along with notes on the problem and the commands used for the same router. I used these to make documentation on how to handle these types of issues for the other operation techs. My old notes are helping me study for the CCENT test.

    As to the Love Dare, I think it will take more like 40 weeks than the 40 days of the book. Yes, I am making progress - slow, but it is progress. I will have more on that in my next blog.

    Read the article

  • Essbase - FormatString

    - by THE
    A look at the documentation for "Typed Measures" shows: "Using format strings, you can format the values (cell contents) of Essbase database members in numeric type measures so that they appear, for query purposes, as text, dates, or other types of predefined values. The resultant display value is the cell's formatted value (FORMATTED_VALUE property in MDX). The underlying real value is numeric, and this value is unaffected by the associated formatted value."

    To actually switch ON the use of typed measures in general, you need to navigate to the outline properties:

        - open the outline
        - select Properties
        - change "Typed Measures enable" to TRUE

    As an example, I created two additional members in the ASOSamp outline:

        - A member "delta Price" in the Measures (Accounts) dimension with the formula ([Original Price],[Curr_year])-([Original Price],[Prev_year]). This is equivalent to the Variance formula used in the "Years" dimension.
        - A member "Var_Quickview" in the "Years" dimension with the same formula as the "Variance" member. This will be used simply to display a second cell with the same underlying value as Variance, but formatted using a format string, hence enabling BOTH in the same report.

    In the outline you now select the member you want the format string associated with and change the "associated Format String" in the member properties. As you can see in this example, an IIF statement reading

        MdxFormat(IIF(CellValue() < 0, "Negative", "Positive"))

    has been chosen for both new members. After applying the format string changes and running a report via SmartView, the result (shown in a screenshot in the original post) is that the cells display "Negative" or "Positive" according to the sign of the underlying value.

    Reference: Essbase Database Admin Guide, Chapter 12, "Working with Typed Measures".

    Read the article

  • Topeka Dot Net User Group (DNUG) Meeting – April 6, 2010

    - by Robz / Fervent Coder
    Topeka DNUG is free for anyone to attend! Mark your calendars now!

    SPEAKER: Troy Tuttle is a self-described pragmatic agilist and Kanban practitioner, with more than a decade of experience in delivering software in the finance and health industries and as a consultant. He advocates that teams improve their performance through the pursuit of better practices like continuous integration and automated testing. Troy is the founder of the Kansas City Limited WIP Society and is a speaker at local area groups on team-related topics. He currently works as a Project Lead Consultant with AdventureTech Group of Kansas City, KS.

    TOPIC: Why Kanban? Kanban is receiving a large amount of attention recently. What does it offer compared to other approaches? Answering that question may require you to hit the "reset" button on previously held biases and assumptions. Kanban blends Lean thought with ideas from first-generation agile methodologies. To get started with Kanban, we will examine what steps are necessary to establish a transparent, work-limited, pull system. We will highlight the perils of allowing too much work-in-progress and how it affects development performance. Once established, Kanban teams need only a few metrics and tools to monitor their performance and improvement.

    WHERE: Federal Home Loan Bank Topeka on the Security Benefit Campus - Directions?

    WHEN: 11:30 AM - 1:00 PM on April 6th, 2010

    REGISTER: http://topekadotnet.wufoo.com/forms/topeka-dnug-meeting-attendance/

    ADDITIONAL INFO: As always, please sign in and out of FHLBank to help them with their accountability. Please park in the visitors section at the front of the building when you arrive. If there are no spots in visitors, you may park in the overflow lot at the far east end of the facility. Lunch will be provided and we will have some great door prizes!

    Read the article

  • NINE Great Reasons to Attend the GlassFish Community Event at JavaOne 2012

    - by Alexandra Huff
    Are you coming to the annual GlassFish Community Event at JavaOne this year? Here are nine great reasons not to miss it!

        1. Great company - Meet and mingle with community leaders and luminaries, the GlassFish engineering team, and Oracle executives!
        2. Learn from others - How are your peers using GlassFish in creative ways? A few community members will share their challenges and creative solutions.
        3. Ask tough questions - Meet Oracle GlassFish and Middleware executives; the panel discussion will be moderated by one of our stellar community leaders!
        4. Shirts! - Be sure to get this year's GlassFish T-shirt, designed by and voted on by YOU, our community members! Don't miss it - they go fast.
        5. Share your story - Give us a two-minute update on why you love GlassFish and how you are using it! We will immortalize you in a very brief video and post it to our GlassFish Stories page!
        6. Find out... - about the new book, hot off the press, authored by our very own Arun Gupta: "Java EE 6 Pocket Guide: A Quick Reference for Simplified Enterprise Java Development"
        7. If you share... - your story, you will win a copy of Arun's new book as our thank-you gift!
        8. Suggest... - some ideas on how to make GlassFish even better!
        9. Have fun - Lively discussion, news and updates, excellent company -- this is THE place to be on Sunday at JavaOne!

    Convinced? Excellent! Then please register here! A JavaOne Pass is required to enter Moscone Center. All passes accepted, including Discover, Exhibitor, Press, Blogger, etc.

    Agenda:
        11:00 - 11:05: Introduction
        11:05 - 11:30: Roadmap and Community Updates
        11:30 - 12:15: Q&A with Executive Speaker Panel from Oracle and the GlassFish Team
        12:15 - 01:00: Customer Testimonials

    Location: Moscone West, Room 2005

    Add sessions UGF10359 and UGF10360 to Schedule Builder.

    Read the article

  • How to experience gradual improvement of knowledge while a newbie does .NET maintenance programming?

    - by amir
    I started my career as a software developer about 6 months ago. This is my first job, and I am the only developer in this company. I gained my .NET knowledge by self-study and by doing some university projects. Our systems have old foundations based on an earlier version of .NET, and I'm starting to feel that I am not improving since I am a maintenance programmer here. Everything is old and my manager is not really taking any chances on gradually improving the software. What is your opinion? What should I do? I am a newbie and am working hard to find my way through. There is no other developer, not even a senior one, to help me here. I need your advice on my situation. And one last thing: can I get a new job having only done maintenance programming? I mean, don't managers say that you do not have the experience of developing new software from scratch? I feel redundant. What do I do?

    Read the article

  • How to install a custom C library?

    - by arijit
    I just wanted to add a C library to Ubuntu which was created by Harvard University for the CS50 course. They provided instructions for how to install the library, listed below.

    Debian, Ubuntu

    First become root, as with:

        sudo su -

    Then install the CS50 Library as follows:

        apt-get install gcc
        wget http://mirror.cs50.net/library/c/cs50-library-c-3.1.zip
        unzip cs50-library-c-3.1.zip
        rm -f cs50-library-c-3.1.zip
        cd cs50-library-c-3.1
        gcc -c -ggdb -std=c99 cs50.c -o cs50.o
        ar rcs libcs50.a cs50.o
        chmod 0644 cs50.h libcs50.a
        mkdir -p /usr/local/include
        chmod 0755 /usr/local/include
        mv -f cs50.h /usr/local/include
        mkdir -p /usr/local/lib
        chmod 0755 /usr/local/lib
        mv -f libcs50.a /usr/local/lib
        cd ..
        rm -rf cs50-library-c-3.1

    I did exactly as directed. But the compiler reported "undefined reference" to a function - the function was GetString. So, I searched for a solution and found one. It said to use the -l switch. Now when I compile I use something like: gcc -o hello.c hello -lcs50 (I don't remember the exact command.) However, I cannot use the make command, which is easier to use. I understand that there is some problem with linking the library. What is a good solution to this problem?

    Read the article

  • Application Scope vs. Static - Not Quite the Same

    - by Duncan Mills
    An interesting question came up today which, innocent as it sounded, needed a second or two to consider: what's the difference between storing, say, a Map of reference information as a static as opposed to storing the same map as an application-scoped variable in JSF? From the perspective of the web application itself there seems to be no functional difference; in both cases the information is confined to the current JVM and potentially visible to your application code (note that Application Scope is not magically propagated across a cluster - you would need a separate instance on each VM). To my mind the primary consideration here is a matter of leakage. A static will be (potentially) visible to everything running within the same VM (OK, this depends on which class loader was used, but let's keep this simple), and this includes your model code and indeed other web applications running in the same container. An application-scoped object, in JSF terms, is much more ring-fenced and is only visible to the web app itself - not to other web apps running on the same server, and not directly to the business model layer if that is running in the same VM. So, given that I'm a big fan of coding applications to say what I mean, using Application Scope appeals because it explicitly states how I expect the data to be used and provides a more explicit statement about visibility and indeed dependency, as I'd generally explicitly inject it where it is needed. Alternative viewpoints / thoughts are, as ever, welcomed...

    Read the article

  • What's in your wallet, er… Inbox?

    - by johndoucette
    Since my first UUCP operation in UNIX to deliver and receive an email, I have always been challenged to find the ultimate email organizer. About a year ago, I switched to a very simple process of managing email and have found the ultimate in organization. On the craziest of days, with 250+ emails, I keep my inbox empty. Here is how I do it.

    First, start with the following folders in your mailbox:

        Inbox
        Archive
        FollowUp
        Hold

    Of course, all inbound emails will start in the Inbox. As you work throughout the day, follow these steps to keep your inbox empty:

        1. Read the email. Are you responsible for any action? If you are and can do it immediately, then do it. If you need to do it later, move the email to the "FollowUp" folder.
        2. If you are not responsible for any action, move it to the Archive folder. Use Outlook's search to find it when you need it.
        3. If you will need to reference the email later in the week or for a short term (a week or two), then move the email to the "Hold" folder.
        4. As your day progresses, frequently review the FollowUp folder and accomplish the tasks in it.

    Notes:
        * If I am waiting for someone to do something for me, I keep it in the FollowUp folder. As I review the folder, I am constantly reminded that there is something I am waiting on - and can send a simple reminder by forwarding the original email.
        * I sometimes send myself a "todo" email and park it in the FollowUp folder.
        * I like to know how many emails are in the folders, so I set the "Show total number of items" property on each folder to show the number of emails.

    Read the article

  • Optimize Many-to-Many with SUMMARIZE and Other Techniques

    - by Marco Russo (SQLBI)
    We are still in the early days of DAX, and even though I have been using it for two years, there is still a lot to learn. One of the topics that has historically interested me (and many of the readers here, probably) is many-to-many relationships between dimensions in a dimensional data model. When Alberto and I wrote The Many to Many Revolution 2.0 we discovered the SUMMARIZE-based pattern very late in the writing of the whitepaper. It is very important for performance optimization and it should always be used. In the last month, Gerhard Brueckl also presented an approach based on cross table filtering behavior that simplifies the syntax involved, even if it's harder to explain how it works internally. I published a short article titled Optimize Many-to-Many Calculation in DAX with SUMMARIZE and Cross Table Filtering on the SQLBI website just to provide a quick reference to the three patterns available. Further study is still required to compare performance between the SUMMARIZE and cross table filtering patterns. Up to now, I haven't observed big differences between them, even if their execution plans might not be identical, and this suggests that depending on other conditions you might favor one over the other.

    Read the article

  • How should I structure my database to gain maximum efficiency in this scenario?

    - by Bob Jansen
    I'm developing a PHP script that analyzes the web traffic of my clients' websites. By placing a link to a JavaScript file on the client's website (think Google Analytics), my script harvests information like the visitor's IP address, reference link, current page link, user agent, etc. My clients can then view these statistics via a control panel that I have built. These clients can also adjust profile settings, set firewall rules, create support tickets and pay invoices. Currently all the traffic is stored in one table. You can imagine that this table would become very large, as some of my clients receive thousands of pageviews per day. Furthermore, all the traffic data of each client would be stored in the same table, creating a mess. The same currently goes for the firewall rules and for the invoice and support systems. I'm looking for a way to structure my database in a more organized way, to hold large amounts of data for multiple users. This is the first project I'm developing that deals with so much data, and I would like to hear suggestions and tips. I was thinking of using multiple databases to structure the data. The main database would store user data (email, password, ID, etc.) and admin/website settings. Then each client would have a unique database labeled prefix_userid, which carries tables holding their traffic, invoice, and support ticket data. Would this be a good solution, and would spreading the data over multiple databases slow down or speed up overall performance? I have a solid VPS, but would like to play it safe and be as efficient as possible.

    Read the article
