Search Results

Search found 22301 results on 893 pages for 'software sources'.


  • How do I stop color changes when quitting vi from a terminal emulator?

    - by Michael Warhol
    I have a problem with colors when using vi under Ubuntu 12.04. I'm connecting to my Ubuntu server from a PC, using PowerTerm terminal emulation software. I have PowerTerm set up to display black text on a grey background. When I connect to the Ubuntu box, the screen is fine. When I open a file with vi, the screen is fine: the text is black on a grey background, which is normal for my PowerTerm setup. However, if the file is less than a full screen long, the remainder of the screen has a black background. When I quit vi, the entire background turns black and the text becomes white, and I have to do a Terminal Reset to restore my normal text and background colors. What I want is for there to be no change at all when I use vi: the text should be black and the background grey. I have another server loaded with RedHat 9, and that one behaves normally; colors don't change when using vi. Here is my .vimrc file:

        set compatible
        syntax off
        let g:loaded_matchparen=1
        set nocp
        set noincsearch
        set nohlsearch
        set noshowmatch
        set bg=dark

    I've tried both set bg=dark and set bg=light; it makes no difference. Is there some other set command that would clear this up for me, or some TERM setting (my TERM is set to linux)?
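
    A possible lead, offered as a sketch rather than a verified fix: the black fill after short files looks like vim relying on the terminal's "background color erase" capability, which TERM=linux advertises but which PowerTerm may render with its own default colors. Clearing vim's t_ut option makes vim paint the background itself; whether PowerTerm honors that is an assumption here. Something to try in .vimrc:

        " Sketch: stop vim from using background-color-erase (BCE),
        " forcing it to draw the background color explicitly.
        set t_ut=
        " Assumption: a grey background usually pairs with bg=light.
        set bg=light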

    Read the article

  • How to take a CSS animation from a browser, and export a GIF of it?

    - by Truth
    I have the following CSS3 animation going on: http://dabblet.com/gist/2884702. It's basically a simulation of a mirror rotating on its x-axis. Now, I wish to present that in a PowerPoint presentation. Since PowerPoint doesn't have the WebKit engine, I want to extract an animated GIF of that animation and embed it into my presentation. The problem? No matter what I've tried, I couldn't make a reasonably smooth animated GIF. I've Googled and found plenty of free software that claims to do the job, tried several, and none worked as expected. I've tried IrfanView, same issue (also their site makes me want to vomit). So, is there a solution? Or am I doomed to not be able to display it?
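
    One workable route, sketched on the assumption that you can first capture the animation as a sequence of PNG frames (any screen recorder that exports individual frames will do), is to stitch the frames into a GIF yourself; smoothness then depends only on how many frames you captured. A Pillow sketch, where the frame_*.png names and the 40 ms delay are placeholders:

        # Sketch: combine captured PNG frames into a looping GIF (Pillow assumed).
        from pathlib import Path
        from PIL import Image

        frames = [Image.open(p) for p in sorted(Path(".").glob("frame_*.png"))]
        frames[0].save(
            "mirror.gif",
            save_all=True,            # write all frames, not just the first
            append_images=frames[1:],
            duration=40,              # ms per frame; ~25 fps reads as smooth
            loop=0,                   # loop forever
        )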

    Read the article

  • How to Manage and Use LVM (Logical Volume Management) in Ubuntu

    - by Justin Garrison
    In our previous article we told you what LVM is and what you may want to use it for, and today we are going to walk you through some of the key management tools of LVM so you will be confident when setting up or expanding your installation. As stated before, LVM is an abstraction layer between your operating system and your physical hard drives. What that means is that the hard drives and partitions your operating system sees are no longer tied to the physical hard drives and partitions they reside on. Rather, the drives your operating system sees can be any number of separate hard drives pooled together or in a software RAID.
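
    As a concrete taste of the tools covered, pooling two disks into one logical volume takes only a handful of commands; a sketch in which the device names, volume names, and sizes are all placeholders:

        # Sketch: pool two disks into a volume group, carve out one volume.
        pvcreate /dev/sdb /dev/sdc            # mark disks as physical volumes
        vgcreate vg0 /dev/sdb /dev/sdc        # pool them into a volume group
        lvcreate -L 500G -n media vg0         # create a 500 GB logical volume
        mkfs.ext4 /dev/vg0/media              # format it like any block device
        lvextend -L +100G /dev/vg0/media      # grow it later, no repartitioning
        resize2fs /dev/vg0/media              # then grow the filesystem to match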

    Read the article

  • Interface to collect successful remote backups status

    - by Aseques
    I would like to deploy into our infrastructure a web interface that registers when the copies have finished, and flags it when for some reason they haven't. The current issue is that we are doing on-site backups for customers; for each backup a mail is sent at the end, but sometimes the mail isn't sent for a variety of reasons: the system doesn't have internet access, the backup system crashed before sending the mail, etc. What I'd like is a web interface that the backup software can visit after doing the backup (whether it succeeded or failed) to acknowledge that the backup has finished; after some time, I'd like to receive a report of the machines that haven't done their backup. Is there anything remotely similar to this that I could use/adapt to our environment? UPDATE: Just found this (paessler.com), which seems to be a proprietary solution doing what I intended.
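
    For what it's worth, the check-in half of this is small enough to sketch as a "dead man's switch"; Flask, the in-memory store, and the 26-hour overdue window are all assumptions here:

        # Sketch: backup scripts hit /checkin/<host> when they finish;
        # /overdue lists hosts that have not reported within the window.
        import time
        from flask import Flask, jsonify

        app = Flask(__name__)
        last_seen = {}                  # host -> unix timestamp (in-memory only)
        WINDOW = 26 * 3600              # assumed: daily backups plus 2 h grace

        @app.route("/checkin/<host>")
        def checkin(host):
            last_seen[host] = time.time()
            return "ok"

        @app.route("/overdue")
        def overdue():
            now = time.time()
            return jsonify([h for h, t in last_seen.items() if now - t > WINDOW])

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)

    A cron job could mail the output of /overdue once a day. Note the sketch only flags hosts that have checked in at least once; a real version would pre-register the expected machines.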

    Read the article

  • Disadvantages of enabling AHCI after Win7 install

    - by Mario De Schaepmeester
    I've formatted my notebook, which has a 5400 RPM HDD with ~500 GB capacity. After installing Windows 7 and about half the drivers (including the chipset drivers), I began to doubt whether to go for IDE or AHCI mode for my hard drive. There used to be a lot of discussion on the internet about which is better, and so far I understood it was particularly helpful for SSDs. Now the general consensus seems to be that AHCI mode is best for most hard drives. I have thus enabled AHCI in the middle of configuring my notebook (rest of the drivers, necessary software, etc.). Two questions: considering my HDD's specs above, should I leave it on? Is there any disadvantage to enabling it after the Windows 7 and chipset driver installation? The Windows 7 version is 64-bit Home Premium.
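
    One caveat worth flagging, hedged because it depends on which storage driver was active during installation: Windows 7 disables its in-box AHCI driver when installed in IDE mode, so flipping the BIOS setting alone can end in a STOP 0x7B at boot. The commonly documented remedy is to enable the msahci service in the registry before making the switch:

        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\msahci]
        "Start"=dword:00000000

    If the machine already boots fine in AHCI mode, that driver is evidently active and there should be no lasting disadvantage from the mid-configuration switch.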

    Read the article

  • Gain More From Your Oracle Investments

    - by Oracle OpenWorld Blog Team
    By Yaldah Hakim, Oracle Managed Cloud Services. Oracle Managed Cloud Services enables organizations to leverage their Oracle investments by extending them into the cloud, for greater value, choice, and confidence. At Oracle OpenWorld, Oracle Managed Cloud Services has numerous activities and educational sessions planned so you can explore how your organization will benefit from the power of Oracle software and hardware in the cloud. Here are just a few of the Oracle Managed Cloud Services breakout sessions you can attend: Moving into the Cloud with Oracle Cloud Services; Upgrade Your Oracle Applications into the Cloud; Cloud Services: Security and Compliance in the Cloud. And don't forget to check out the Oracle Cloud Services Lounge at Moscone West Level 3, where you can schedule one-on-one meetings with the cloud services experts. Lounge hours: Monday, October 1, 10:00 a.m. to 6:00 p.m.; Tuesday, October 2, 10:00 a.m. to 6:00 p.m.; Wednesday, October 3, 10:00 a.m. to 4:00 p.m.; Thursday, October 4, 10:00 a.m. to 2:00 p.m. For a schedule of all Managed Cloud Services activities at Oracle OpenWorld, go here.

    Read the article

  • Deciding on a company-wide javascript strategy [on hold]

    - by drogon
    Our company is moving most of its software from thick-client WinForms apps to web apps. We are using ASP.NET MVC on the server side. Most of the developers are brand new to the web and need to become efficient and knowledgeable at writing client-side web code (JavaScript). We are deciding on a number of things and would appreciate feedback on the following: Angular.js or Backbone.js? Backbone (with Underscore) is certainly more lightweight, but requires more custom development; Angular seems to be a full-fledged framework, but would require everyone to embrace it and probably has a longer learning curve (note: I know nothing about Angular at this point). Require.js or script includes with MVC bundle config? Require.js makes development "feel like" C# (importing namespaces), but integrating the build/minification process can be a pain (especially the configuration); bundling via MVC requires developers to worry more about which scripts to include, but has less overall development friction. TypeScript or JavaScript? Regardless of frameworks, our developers are going to need to learn the basics. TypeScript is more like C# and may be easier for C# developers to pick up; however, learning TypeScript before JavaScript may hinder their mastery of JavaScript at the expense of long-term efficiency.

    Read the article

  • Attributes of an Ethical Programmer?

    - by ahmed
    Software that we write has ramifications in the real world. If it didn't, it wouldn't be very useful. Thus it has the potential to sweep across the world faster than a deadly man-made virus, or to affect society every bit as much as genetic manipulation. Maybe we can't see how right now, but in the future our code will have ever-greater potential for harm or good. Of course, there's the issue of hacking. That's clearly a crime. Or is it that clear? Isn't hacking acceptable for our government in the name of national security? What about for other governments? Cases of life-and-death emergency? Tracking down deadbeat parents? Screening the genetic profile of job candidates? Where is the line drawn? Who decides? Do programmers have responsibility for how their code is used? What if a programmer writes code to pry into confidential information or copy-protected material? Does he bear responsibility along with the person who used the program? What about a programmer who knowingly or unknowingly writes code to "fix the books"? Should he be liable?

    Read the article

  • LINQ and ArcObjects

    - by Marko Apfel
    Motivation

    LINQ (Language Integrated Query) has been a component of the Microsoft .NET Framework since version 3.5. It allows SQL-like queries against various data sources such as SQL databases, XML, etc. Like SQL, LINQ provides a declarative notation for problem solving - i.e. you don't need to describe in detail how a task is to be solved; you describe what is to be solved at all. This frees the developer from error-prone iterator constructs. Ideally, of course, one would access features the same way. Then this construct is conceivable:

        var largeFeatures =
            from feature in features
            where (feature.GetValue("SHAPE_Area").ToDouble() > 3000)
            select feature;

    or its equivalent as a lambda expression:

        var largeFeatures =
            features.Where(feature => (feature.GetValue("SHAPE_Area").ToDouble() > 3000));

    This requires an appropriate provider, which manages the corresponding iterator logic. That is easier than you might think at first sight - you only have to deliver the desired entities as IEnumerable<IFeature>. LINQ automatically establishes a state machine in the background whose execution is delayed (deferred execution): only when you actually request entities (foreach, Count(), ToList(), ...) does processing take place, even though the query was created at a completely different place. Especially with multiple iterations through the entities, during your first debugging sessions you will rub your eyes when the execution pointer magically jumps back into the iterator logic.

    Realization

    A very concise way to construct an IEnumerable<IFeature> is to run through an IFeatureCursor and return each feature via yield. For easier usage I have put the logic into an extension method GetFeatures() for IFeatureClass:

        public static IEnumerable<IFeature> GetFeatures(this IFeatureClass featureClass,
                                                        IQueryFilter queryFilter,
                                                        RecyclingPolicy policy)
        {
            IFeatureCursor featureCursor =
                featureClass.Search(queryFilter, RecyclingPolicy.Recycle == policy);
            IFeature feature;
            while (null != (feature = featureCursor.NextFeature()))
            {
                yield return feature;
            }
            // this is skipped in unit tests with a cursor mock
            if (Marshal.IsComObject(featureCursor))
            {
                Marshal.ReleaseComObject(featureCursor);
            }
        }

    So you can now easily generate the IEnumerable<IFeature>:

        IEnumerable<IFeature> features =
            _featureClass.GetFeatures(queryFilter, RecyclingPolicy.DoNotRecycle);

    You have to be careful with the recycling cursor. After a deferred execution in the same context, it is not a good idea to iterate over the features a second time: in that case only the content of the last (recycled) feature is provided, and all the features in the second pass are the same. Therefore this expression would be critical:

        largeFeatures.ToList().ForEach(feature => Debug.WriteLine(feature.OID));

    because ToList() iterates once through the list, so the cursor has already moved through all the features, and the extension method ForEach() then always delivers the same feature. In such situations you must not use a recycling cursor. Repeated executions of ForEach() are not a problem, because each time the state machine is re-instantiated and the cursor runs again - that's the magic already mentioned above.

    Perspective

    You can also go one step further and write your own implementation of the interface IEnumerable<IFeature>. Only the method and property for accessing the enumerator have to be programmed; in the enumerator itself, the Reset() method re-executes the search. This can be achieved with an appropriate delegate in the constructor:

        new FeatureEnumerator<IFeatureClass>(
            _featureClass,
            featureClass => featureClass.Search(_filter, isRecyclingCursor));

    which is called in Reset():

        public void Reset()
        {
            _featureCursor = _resetCursor(_t);
        }

    In this manner, enumerators for completely different scenarios can be implemented and used on the client side exactly as described above. Thus cursors, selection sets, etc. merge into a single concept, and the reusability of code increases immensely. On top of that, an IEnumerable can be mocked very easily in automated unit tests - a major step towards better software quality.

    Conclusion

    Nevertheless, caution should be exercised with these constructs in performance-relevant queries. Managing the state machine in the background creates considerable overhead; processing takes roughly 20 to 100 percent more time. In addition, working without a recycling cursor can quickly become a performance trap. However, declarative LINQ code is much more elegant, less error-prone, and easier to maintain than manually iterating, comparing, and building up a result list. In my experience the code size shrinks by 75 to 90 percent on average! So I am happy to wait a few milliseconds longer. As so often, maintainability has to be balanced against performance - and for me, maintainability is gaining in priority. In times of multi-core processors, the processing time of most business processes is in any case dominated not by code execution but by waiting for user input.

    Demo source code

    The source code for this prototype, with several unit tests, can be downloaded here: https://github.com/esride-apf/Linq2ArcObjects.

    Read the article

  • Windows 7 wifi reports "no network access" and "no internet access" but connects in fedora

    - by rick2047
    I am running Windows 7 Home Basic (64-bit) on an Acer 5742G laptop with an Atheros wifi adapter in it. Yesterday I hibernated my computer as I always do, and up until then the wifi was working fine. When I booted the computer up again today I started having a strange problem: it detects my wifi, but after connecting to it, it keeps oscillating between states of "no network access" and "no internet access". I can't connect to anything (the internet or my router). I tried to reset my internet protocol stack using this fixit file. I also tried to uninstall and reinstall my network driver. Neither helped. I am using the same laptop's Fedora installation right now and the wifi is working perfectly fine. Please help. Edit: To add additional details, I have Microsoft Security Essentials as my antivirus software, and I haven't messed with the firewall or the router configuration.

    Read the article

  • Capture documents in bitonal, or grayscale then downsample

    - by Jason R. Coombs
    I'm about to embark on a document archival process. I'm going to spend a lot of good money to archive some paper (actually microfiche) to TIFF images. I have a choice of 300-dpi bitonal (1-bit, black/white) or 300-dpi grayscale (8-bit). Cost is the same for either format. Data volume (and thus image size) is not a factor. It seems to me that the grayscale, since it is scanned at the same resolution as the bitonal, would always contain more information and could always be downsampled to the equivalent bitonal image. Are there any downsides to selecting grayscale and then later downsampling to bitonal if desired? In other words, is it possible that the scanning software will produce a more accurate (or more legible) representation than a grayscale image converted to bitonal?
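
    On the mechanics of the later conversion, grayscale-to-bitonal is a one-liner in most imaging tools. A Pillow sketch, where the filename and the fixed 128 threshold are assumptions (a production workflow would likely use adaptive thresholding for legibility):

        # Sketch: convert an 8-bit grayscale TIFF to 1-bit black/white.
        from PIL import Image

        gray = Image.open("page-0001.tif").convert("L")   # ensure 8-bit grayscale
        # Hard global threshold at 128; the image is pure 0/255 before mode "1",
        # so no dithering pattern is introduced and text edges stay crisp.
        bitonal = gray.point(lambda px: 255 if px > 128 else 0).convert("1")
        bitonal.save("page-0001-bw.tif", compression="group4")  # CCITT G4, the usual bitonal TIFF codec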

    Read the article

  • Slackware 12 - installed cairo but cannot be seen

    - by piro
    Hi. I wanted to install gtk+ 2.16.5, so I also installed glib, pango and cairo. All seemed to work well except for cairo. At first I got an error while configuring: Requested 'cairo >= 1.6' but version of cairo is 1.4.12. I installed the newest version of cairo without any problems, rebooted the computer, and when I ran configure again the same thing happened: it showed me the same error. I also see this: Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables BASE_DEPENDENCIES_CFLAGS and BASE_DEPENDENCIES_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. Can someone help me? Thanks.
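
    A sketch of the usual diagnosis, assuming the freshly built cairo went to the default /usr/local prefix while Slackware's 1.4.12 package lives under /usr: pkg-config is still finding the old cairo.pc first, so pointing PKG_CONFIG_PATH at the new prefix (path assumed) should change what configure sees:

        # Sketch: check which cairo pkg-config resolves, then prefer /usr/local.
        pkg-config --modversion cairo     # probably still prints 1.4.12
        export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
        pkg-config --modversion cairo     # should now print the new version
        ./configure                       # re-run gtk+'s configure in the same shell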

    Read the article

  • Is email forwarding to the sender's address usually blocked in Mail servers / MTA ?

    - by codecowboy
    I've noticed that email forwarding to an address seems not to work if I send the email from the address to which the mail is being forwarded. This happens on both GMail and Fasthosts mail servers. E.g. I send an email to [email protected] from [email protected]; [email protected] is set to forward to [email protected], and the email never arrives. I realise this seems logical, but it is a potential cause of confusion when testing email functionality in a web application (for me, anyway ;-). I would just like to know if this is standard behaviour for all MTA software, so I can avoid confusing myself.

    Read the article

  • How to reliably synchronise file servers between London and Shanghai?

    - by Andy S
    We have two offices, one in London and one in Shanghai, each needing to be able to access the same set of files. This means we need a solid, speedy means of synchronising a set of folders between servers at either office. They're likely to be Windows servers, but we could look at Linux boxes if the software side makes more sense on *nix. We've considered Rsync, Unison, Gluster, and a few other options, but none of them seem capable of reliably keeping the servers in sync between such distant office locations. Each office is on DSL connectivity over the open internet, so encryption is also a factor. Does anyone have any hints for getting the servers synchronising in as close to real time as possible, without dying constantly? Andy
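
    For reference, the baseline most setups start from is a scheduled one-way rsync over SSH, sketched below with placeholder hosts and paths; it gives encryption and delta transfers for free, but it is not real time, and a naive two-way schedule can clobber concurrent edits - the gap that tools like Unison try to fill:

        # Sketch: one-way push of the shared tree from London to Shanghai.
        rsync -az --partial --delete \
            -e ssh /srv/shared/ sync@shanghai.example.com:/srv/shared/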

    Read the article

  • HDMI connection does not support HDCP

    - by mroggi
    My problem: an error message when playing Blu-ray movies stating that the HDCP encryption could not be established. My setup: a new projector (EPSON EMP TW700) with an HDCP-compliant HDMI port; a PC with a brand-new graphics adapter (Sapphire HD 4350 512MB DDR2) supporting HDCP; the connection made with a DVI cable (it's installed in my wall) and a DVI-HDMI adapter to connect the projector; latest drivers and software. My questions: What can I do to establish the HDCP connection? Would it help to use the HDMI output of the graphics adapter instead of the DVI one (could it be that the HDCP chip is only supported on HDMI)? Any other ideas? I am very thankful for any hint.

    Read the article

  • Proper way to rotate Nginx logs

    - by depesz
    I would like to achieve rotation of nginx logs that: would work without any extra software (i.e. best without logrotate); would create rotated files with names based on the date. The best approach is something like PostgreSQL has - i.e. in its log_filename config variable I can specify strftime-style %Y-%m-%d, and it will automatically change the log on date (or time) change. Another approach is Apache's - sending logs via a pipe to the rotatelogs program. As far as I was able to search, no such approach exists for nginx. All I can do is use logrotate with the dateext option, but it has its own set of drawbacks, and I'd rather use something that works like |rotatelogs or log_filename in PostgreSQL.
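
    For completeness, the hook nginx itself offers is the USR1 "reopen logs" signal, so a dated rotation needs nothing beyond mv and a cron entry; a sketch, with paths assumed from a stock install:

        # Sketch: rename the live log, then tell nginx to reopen its files.
        mv /var/log/nginx/access.log "/var/log/nginx/access-$(date +%Y-%m-%d).log"
        kill -USR1 "$(cat /var/run/nginx.pid)"

    This still renames after the fact rather than writing to a dated file from the start, so it is closer to |rotatelogs than to PostgreSQL's log_filename; nginx offers no strftime-style access_log name.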

    Read the article

  • How to handle fine grained field-based ACL permissions in a RESTful service?

    - by Jason McClellan
    I've been trying to design a RESTful API and have had most of my questions answered, but there is one aspect of permissions that I'm struggling with. Different roles may have different permissions and different representations of a resource. For example, an Admin or the user himself may see more fields in his own User representation than another, less-privileged user. This is achieved simply by changing the representation on the backend, i.e. deciding whether or not to include those fields. Additionally, some actions may be taken on a resource by some users and not by others. This is achieved by deciding whether or not to include those action items as links, e.g. edit and delete links. A user who does not have edit permissions will not have an edit link. That covers nearly all of my permission use cases, but there is one that I've not quite figured out. There are some scenarios whereby, for a given representation of an object, all fields are visible to two or more roles, but only a subset of those roles may edit certain fields. An example:

        {
            "person": {
                "id": 1,
                "name": "Bob",
                "age": 25,
                "occupation": "software developer",
                "phone": "555-555-5555",
                "description": "Could use some sunlight.."
            }
        }

    Given 3 users - an Admin, a regular User, and Bob himself (also a regular User) - I need to be able to convey to the front end that: Admins may edit all fields; Bob himself may edit all fields; but a regular User, while they can view all fields, can only edit the description field. I certainly don't want the client to have to make the determination (or even, for that matter, to have any notion of the roles involved), but I do need a way for the backend to convey to the client which fields are editable. I can't simply use a combination of representation (the fields returned for viewing) and links (whether or not an edit link is available) in this scenario, since it's more finely grained. Has anyone solved this elegantly without adding the logic directly to the client?
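
    One pattern that keeps the client role-agnostic, offered as a shape to consider rather than a standard: let the representation itself carry a metadata block naming the fields writable by the requesting user, computed on the backend exactly the way field visibility already is. For the regular-User case it might look like:

        {
            "person": {
                "id": 1,
                "name": "Bob",
                "age": 25,
                "occupation": "software developer",
                "phone": "555-555-5555",
                "description": "Could use some sunlight.."
            },
            "_meta": {
                "editable": ["description"]
            }
        }

    Admins and Bob would instead receive the full field list in "editable", and the client simply disables anything not named, with no knowledge of why.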

    Read the article

  • IRQ Conflicts Causing Video Card and Boot Problems?

    - by sanpatricio
    tl;dr - I have 4 devices sharing 1 IRQ. Is this bad, and how do I tell the BIOS to stop it? Background: I have an old Dell GX280 dual Pentium 4 that I (semi) resurrected last weekend with an installation of Ubuntu 12.04. Everything was going fine the first several hours until a problem that plagued me when WinXP was on that machine happened -- it froze. Completely froze. None of the myriad ways I have found here on askubuntu helped me to regain control except a long-press of the power button to shut it off. Clearly, this wasn't a software/WinXP issue. After much googling, I found that hardware conflicts can often cause this sort of total lock-up, and with all the odd blocks of yellow and flecks of color showing on my screen (both WinXP and Ubuntu) I figured my old GeForce 7600 was failing and causing me these odd issues. (A good canned-air dusting of the entire interior fixed the color-fleck problem.) Again through much googling and numerous answers found on askubuntu, I somehow stumbled onto the lshw command. After going through it, line by line, I found that I have four devices sharing IRQ 16: eth0, wlan0, ide0 (DVD-RW), and my video card. In hindsight, I can recall weird instances of my Ethernet connection to another computer not working when I thought it should. I never fully troubleshot those issues, so it could be a coincidence. The other thing that has been plaguing me since installing Ubuntu (it wasn't there under WinXP) has been periodic moments of my monitor getting no signal from Ubuntu during boot. The first couple of days, it would disappear after the Dell boot screen and reappear at the Ubuntu login. Now it disappears after the Dell boot screen and doesn't return at all -- I have to hit F12, where I can load a safe-mode version of Ubuntu and get more details like dmesg and lsdev. I also ran memtest86 overnight and woke up to zero errors, so failing RAM is out. Where do I go from here?

    Read the article

  • script to automatically test if a web site is available

    - by Xoundboy
    I'm a lone web developer with my own CentOS VPS hosting a few small web sites for my clients. Today I discovered my httpd service had stopped (for no apparent reason - but that's another thread). I restarted it, but now I need to find a way to be notified by email and/or SMS if it happens again - I don't like it when my client rings me to tell me their web site doesn't work! I know there are probably many different possibilities, including server monitoring software. I think all I really need is a script that I can run as a cron job from my dev host (which is permanently running in my office) that attempts to load a page from my production server, and if it doesn't load within, say, 30 seconds, sends me an email or SMS. I'm pretty rubbish at shell scripting, hence this question. Any suggestions would be gratefully appreciated, thanks to all you clever sysadmin guys and girls out there :)
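
    A minimal sketch of exactly that cron job, written in Python so the timeout and error handling stay readable; the URL, the addresses, and the assumption of a local mail relay on the dev host are all placeholders to adapt:

        # Sketch: fetch the page with a 30-second timeout; mail on any failure.
        # Run from cron, e.g.:  */5 * * * *  /usr/bin/python3 /opt/checksite.py
        import smtplib
        import urllib.request
        from email.message import EmailMessage

        URL = "http://www.example.com/"          # placeholder: page to test
        ALERT_TO = "me@example.com"              # placeholder: your address

        def alert(reason):
            msg = EmailMessage()
            msg["Subject"] = "Site check failed: " + URL
            msg["From"] = "monitor@example.com"  # placeholder sender
            msg["To"] = ALERT_TO
            msg.set_content(reason)
            with smtplib.SMTP("localhost") as smtp:   # assumes a local MTA
                smtp.send_message(msg)

        try:
            # urlopen raises on HTTP errors, timeouts, and refused connections
            with urllib.request.urlopen(URL, timeout=30) as resp:
                if resp.status != 200:
                    alert("Unexpected HTTP status %d" % resp.status)
        except Exception as exc:
            alert(repr(exc))

    SMS can usually ride on the same script via a carrier's email-to-SMS gateway address.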

    Read the article

  • Lock Windows keyboard and mouse but still display screen normally

    - by Stephen Lacy
    I'm using Windows 7 and I have a dual-monitor display. It shows important information related to the business, and I'd rather that random users who walk in couldn't just walk over and start using the computer with the same access rights as the user the monitoring software is running as. What I would like is that any time someone presses a button on the keyboard, including Alt-Ctrl-Delete, all that appears is a dialog asking for a password; then I can click cancel and it will return to showing the data I want displayed. ClearLock doesn't work; I tried it, by the way.

    Read the article

  • High-res icon in Windows Vista alt-tab thumbnail preview?

    - by netvope
    I have customized my alt-tab screen with the following:

        [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\AltTab]
        "OverlayIconPx"=dword:00000040
        "MaxThumbSizePx"=dword:00000100
        "MinThumbSizePcent"=dword:00000064

    It works great: the thumbnail becomes 256 pixels wide and the icon at the corner of the thumbnail becomes 64x64 pixels. However, Windows doesn't load the high-res icons from the programs; instead, it uses the 16x16-pixel icon and scales it up by nearest-neighbor. I'm sure the programs have high-res icons, because I saw them in the "Extra Large Icons" view in Explorer. So the question is: how can I force Windows to load the high-res icons for the alt-tab thumbnail preview? (Perhaps a registry key, or a .dll hack/injection?)

    Read the article

  • Translating Documents from a Foreign Language into English on my Computer?

    - by Simon
    I am aware that websites can be translated from many languages into English thanks to Google Translate. If I receive documentation via email that is in a language other than English, how straightforward is it to translate into English on my PC or Apple Mac? (Indeed, is Google Translate involved, or is it strictly for websites?) Similarly, if I receive documentation via the normal postal mail service (termed "snail mail" if I'm correct) which needs to be translated into English, what steps need to be taken for this documentation to be translated effectively and quickly on my PC or Apple Mac? (I am aware of Optical Character Recognition (OCR) software; is it costly, or do free alternatives exist to carry out the translation process solely online?)

    Read the article

  • What kind of hosting do I need?

    - by Robert Smith
    I migrated this question from Server Fault; hopefully this is the appropriate place. I have been trying to answer this question but I haven't found a specific answer to my situation, and as I want to pay only for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find in plugins, e.g. WP-Forum, or phpBB-type software) written in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month, with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images, and I certainly don't need a huge amount of bandwidth (200GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256MB be enough (that's the amount of RAM offered by small instances in Amazon and Rackspace)? Do you know of any alternatives to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. By the way, I was told that Linode is not all that different from Amazon EC2, but this website is supposed to work 24/7, so I can't take advantage of Linode's flexibility regarding creating and deleting instances.
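
    Since the nginx-plus-gunicorn pairing came up, the wiring is small enough to sketch; the project name, worker count, port, and server name below are assumptions to adapt:

        # Sketch: run the Django app under gunicorn behind nginx.
        gunicorn myforum.wsgi:application --workers 3 --bind 127.0.0.1:8000

    with an nginx server block that forwards to it:

        server {
            listen 80;
            server_name forum.example.com;
            location / {
                proxy_pass http://127.0.0.1:8000;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    On a 256MB instance, nginx plus two or three gunicorn workers generally fits this kind of traffic, though the Django app's own footprint ultimately decides it.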

    Read the article

  • Sample domain model for online store

    - by Carel
    We are a group of 4 software development students currently studying at the Cape Peninsula University of Technology. Currently we are tasked with developing a web application that functions as an online store. We decided to do the back end in Java while making use of Google Guice for persistence (which is mostly irrelevant to my question); the general idea so far is to use PHP to create the website. We decided that we would like to try, after handing in the project, to register a business and actually implement the website. The problem we have been experiencing is with the domain model. These are mostly small issues; however, they are starting to impact the schedule of our project. Since we are all young IT students, we have virtually no experience in the business world, and as such we spend quite a significant amount of time planning the domain model in the first place. Now, one of the issues we're picking up is, say, the reference between the Customer entity and the Order entity. Currently we don't have the customer id in the Order entity, and we have a list of Order entities in the Customer entity. Lately I have wondered: will the persistence mechanism put the customer id physically in the order table, even if it's not in the entity? So I started wondering: if you load a customer object, will it search the entire order table for orders with the customer's id? Now, say you have 10,000 customers and 500,000 orders; won't this take an extremely long time? There are also some business processes that I'm not completely clear on. Finally, my question is: does anyone know of a sample domain model out there that is similar to what we're trying to achieve and that is safe to look at as a reference? I don't want to be accused of stealing anybody's intellectual property, especially since we might implement this as a business.

    Read the article

  • Is it possible to upload only files that have been updated into a server?

    - by kamikaze_pilot
    Hi guys, suppose I have a server accessible via FTP that hosts websites. Suppose I want to edit a website locally so it won't affect the live site, and suppose I edit a whole bunch of files and don't want to deal with the hassle of keeping track of which files I've edited. Once I've finished editing, I want to upload the changes to the server via FTP. Is there some FTP software that automatically detects which files have been edited and uploads and overwrites only those files, rather than having me manually choose the files I've edited (and hence having to keep track of them) or upload the entire site, which is a waste of time? Thanks in advance.
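
    One concrete option on the command line, sketched with placeholder credentials and paths: lftp's reverse mirror compares timestamps and sizes and uploads only the files newer than the server's copies:

        # Sketch: push only changed files from the local ./site to the host.
        lftp -u username,password \
             -e "mirror --reverse --only-newer ./site /public_html; quit" \
             ftp.example.com

    GUI clients offer the same comparison (e.g. WinSCP's synchronize feature) if the command line is unappealing.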

    Read the article
