Search Results

Search found 62069 results on 2483 pages for 'unix time'.


  • Where should you put constants and why?

    - by Tim Meyer
    In our mostly large applications, we usually have only a few locations for constants:
    - One class for GUI and internal constants (tab page titles, group box titles, calculation factors, enumerations)
    - One class for database tables and columns (this part is generated code) plus readable names for them (manually assigned)
    - One class for application messages (logging, message boxes etc)
    The constants are usually separated into different structs in those classes. In our C++ applications, the constants are only declared in the .h file and the values are assigned in the .cpp file. One of the advantages is that all strings etc. are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like, as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the title of similar group boxes / tab pages etc. at once. Another aspect is that you can just print that class and give it to a non-programmer who can check if the captions are intuitive, and if messages to the user are too detailed or too confusing etc. However, I see certain disadvantages:
    - Every single class is tightly coupled to the constants classes.
    - Adding/removing/renaming/moving a constant requires recompilation of at least 90% of the application (note: changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed-optimized release through the Visual Studio compiler takes up to 3 hours. I don't know if the huge number of class relations is the source, but it might well be.
    - You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later" thoughts.
    - Reusing a class in another project isn't always easy (mainly due to other tight couplings, but the constants handling doesn't make it easier).
    Where would you store constants like that? Also, what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer. PS: I know this question is kind of subjective, but I honestly don't know of any better place than this site for this kind of question. Update on this project: I have news on the compile-time thing. Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries, which was now possible much more easily. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table and column names and more - more than 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants allowed us to reduce the total compile time of our project (several applications) in release mode from up to 8 hours to less than one hour!
    We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure as we no longer have to rely on nightly builds only. That fact - and other benefits, such as less tight coupling, better reusability etc. - also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-)
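    To make the declaration/definition split concrete, here is a minimal sketch of the pattern described above (the names are made up for illustration):

        // Constants.h - declarations only; value changes never touch this header
        #pragma once
        #include <string>

        namespace GuiConstants {
            extern const std::string kSettingsTabTitle; // tab page caption
            extern const double kVatFactor;             // calculation factor
        }

        // Constants.cpp - values live here, so editing one recompiles a single translation unit
        #include "Constants.h"

        namespace GuiConstants {
            const std::string kSettingsTabTitle = "Settings";
            const double kVatFactor = 1.19;
        }

    Because only the .cpp carries the values, tweaking a caption relinks but does not ripple a recompile through every including class - which is exactly why only adding, removing or renaming a constant is expensive in this scheme.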

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked to set up retail operations for an electronic goods, daily consumables, or luxury brand, etc. It is very likely you will be faced with the following questions: What are the essential business capabilities that you must have in place? What are the essential business activities under-pinning each of the business capabilities identified in Step 1? What are the sets of steps that you need to perform to execute each of the business activities identified in Step 2? Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short run and in the long run. In the short term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long term, as you grow in operations organically or through M&A, partnerships and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations. "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here) Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely for that reason, a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following: increased time and cost due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch and incurring otherwise avoidable mistakes and errors by ignoring the experience of others; and sub-optimal process efficiency due to a narrow, isolated view of processes that ignores process inter-dependencies, i.e. optimizing the parts but not the whole, resulting in a lack of transparency and inter-departmental finger-pointing. Embracing ARTS standards as a blueprint for establishing, managing or streamlining your retail operations can benefit you in the following ways: improved time-to-market from parity with industry best-practice processes, thus avoiding "reinventing the wheel" for common retail processes, focusing customization effort on differentiation, and lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems; and lower operating costs by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of through a narrow, siloed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization. While parity with an industry standard such as the ARTS business process model does not by itself create a differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • Fast Data - Big Data's Achilles' heel

    - by thegreeneman
    At OOW 2013, in Mark Hurd and Thomas Kurian's keynote, they discussed Oracle's Fast Data software solution stack and a number of customers deploying Oracle's Big Data / Fast Data solutions, in particular Oracle's NoSQL Database. Since that time, there have been a large number of requests seeking clarification on how the Fast Data software stack works together to deliver on the promise of real-time Big Data solutions. Fast Data is a software solution stack that deals with one aspect of Big Data: high velocity. The software in the Fast Data solution stack involves 3 key pieces and their integration: Oracle Event Processing, Oracle Coherence, and Oracle NoSQL Database. All three of these technologies address a high-throughput, low-latency data management requirement. Oracle Event Processing enables continuous queries that filter the Big Data fire hose, chains intelligent events into real-time service invocations, and augments the data stream to provide Big Data enrichment. Extended SQL syntax allows the definition of sliding windows of time, so SQL statements can look for triggers on events like a breach of a weighted moving average on a real-time data stream. Oracle Coherence is a distributed grid caching solution which is used to provide very low latency access to cached data when the data is too big to fit into a single process, so it is spread around in a grid architecture to provide memory-latency-speed access. It also has some special capabilities to deploy remote behavioral execution for "near data" processing. The Oracle NoSQL Database is designed to ingest simple key-value data at a controlled throughput rate while providing data redundancy in a cluster to facilitate highly concurrent, low-latency reads. For example, when large sensor networks are generating data that needs to be captured while analysts are simultaneously extracting the data using range-based queries for upstream analytics. Another example might be storing cookies from user web sessions for ultra-low-latency user profile management, while also leveraging that data in holistic MapReduce operations on your Hadoop cluster to do segmented site analysis. Understand how NoSQL plays a critical role in Big Data capture and enrichment while simultaneously providing a low-latency, scalable data management infrastructure through clustered, always-on, parallel processing in a shared-nothing architecture. Learn how easily a NoSQL cluster can be deployed to provide essential services in industry-specific Fast Data solutions. See these technologies work together in a demonstration highlighting the salient features of these Fast Data enabling technologies in a location-based personalization service. The question then becomes how these things work together to deliver an end-to-end Fast Data solution. The answer is that while different applications will exhibit unique requirements that may drive the need for one or the other of these technologies, often when it comes to Big Data you may need to use them together. You may have the need for the memory latencies of the Coherence cache, but just have too much data to cache, so you use a combination of Coherence and Oracle NoSQL to handle extreme-speed cache overflow and retrieval. Here is a great reference to how these two technologies are integrated and work together: Coherence & Oracle NoSQL Database. On the stream processing side, it is similar to the Coherence case.
    As your sliding windows get larger, holding all the data in the stream can become difficult, and out-of-band data may need to be offloaded into persistent storage. OEP needs an extreme-speed database like Oracle NoSQL Database to help it continue to perform in the real-time loop while dealing with persistent spill in the data stream. Here is a great resource to learn more about how OEP and Oracle NoSQL Database are integrated and work together: OEP & Oracle NoSQL Database.
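    To give a flavor of the simple key-value ingest described above, here is a minimal sketch against the Oracle NoSQL Database Java driver; the store name and helper host are assumptions, and error handling is omitted:

        import oracle.kv.KVStore;
        import oracle.kv.KVStoreConfig;
        import oracle.kv.KVStoreFactory;
        import oracle.kv.Key;
        import oracle.kv.Value;

        public class SensorIngest {
            public static void main(String[] args) {
                // connect to a hypothetical store "kvstore" through one helper host
                KVStore store = KVStoreFactory.getStore(
                        new KVStoreConfig("kvstore", "kvhost1:5000"));

                // key-value put: sensor id as the key, its latest reading as the value
                Key key = Key.createKey("sensor-42");
                store.put(key, Value.createValue("23.7C".getBytes()));

                store.close();
            }
        }

    Reads against the same key remain low-latency even while writers stream in at a controlled rate, which is the capture-plus-concurrent-analytics pattern the examples above describe.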

    Read the article

  • TechEd 2010 Important Events

    If you'll be attending TechEd in New Orleans in a couple of weeks, make sure the following are all on your calendar:
    Party with Palermo, TechEd 2010 Edition - Sunday 6 June 2010, 7:30-9:30pm Central Time. RSVP and see who else is coming here. The party takes place from 7:30pm to 9:30pm Central (local) time, and includes a full meal, free swag, and prizes. The event is being held at Jimmy Buffett's Margaritaville, located at 1104 Decatur Street.
    Developer Practices Session: DPR304 FAIL: Anti-Patterns and Worst Practices - Monday 7 June 2010, 4:30pm-5:45pm Central Time, Room 276. Come to my session and hear about what NOT to do on your software project. Hear my own and others' war stories and lessons learned. You'll laugh, you'll cry, you'll realize you're a much better developer than a lot of folks out there. Here's the official description: Everybody likes to talk about best practices, tips, and tricks, but often it is by analyzing failures that we learn from our own and others' mistakes. In this session, Steve describes various anti-patterns and worst practices in software development that he has encountered in his own experience or learned about from other experts in the field, along with advice on recognizing and avoiding them. View DPR304 in TechEd Session Catalog >>
    Exhibition Hall Reception - Monday 7 June 2010, 5:45pm-9pm. Immediately following my session, come meet the show's exhibitors, win prizes, and enjoy plenty of food and drink. Always a good time.
    Party: Geekfest - Tuesday 8 June, 8pm-11pm Central Time, Pat O'Brien's. Let's face it, going to a technical conference is good for your career but it's not a whole lot of fun. You need an outlet. You need to have fun. Cheap beer and lousy pizza (with a New Orleans twist): we are bringing back GeekFest! Join us at Pat O'Brien's for a night of gumbo, beer and hurricanes. There are limited invitations available, so what are you waiting for? If you are attending the TechEd 2010 conference and you are a developer, you are invited. To register, pick up your "duck" ticket (and wristband) in the TechEd Technical Learning Center (TLC) at the Developer Tools & Languages (DEV) information desk. You must have a wristband to get in. Tuesday, June 8th from 8pm-11pm, Pat O'Brien's New Orleans, 624 Bourbon Street, New Orleans, LA 70130.
    Closing Party at Mardi Gras World - Thursday 10 June, 7:30pm-10pm Central Time. Join us for the Closing Party and enjoy great food, beverages, and the excitement of New Orleans at Mardi Gras World. The colors, the lights, the music, the joie de vivre - it's all here. Learn more >>

    Read the article

  • Managing software projects - advice needed

    - by Callum
    I work for a large government department as part of an IT team that manages and develops websites as well as stand-alone web applications. We’re running into problems somewhere in the SDLC that don’t rear their ugly heads until time and budget are starting to run out. We try to be “Agile” (software specifications are not as thorough as possible, clients have direct access to the developers any time they want) and we are also in a reasonably peculiar position in that we are not allowed to make a profit from the services we provide. We only service the divisions within our government department, and can only charge for the time and effort we actually put into a project. So if we deliver a project that we have over-quoted on, we will only invoice for the actual time spent. Our software specifications are not as thorough as they could be, but they always include at a minimum: wireframe mockups for every form view, a data dictionary of all field inputs, descriptions of any business rules that affect the system, and descriptions of the outputs. I’m new to software management, but I’ve overseen enough software projects now to know that as soon as users start observing demos of the system, they start making a huge number of requests like “Can we add a few more fields to this report… can we redesign the look of this interface… can we send an email at this part of the workflow… can we take this button off this view… can we make this function redirect to a different screen… can we change some text on this screen… can we create a special account where someone can log in and get access to X… this report takes too long to run, can it be optimised… can we remove this step in the workflow… there’s got to be a better image we can put here…” etc. Some changes are tiny and can be implemented reasonably quickly, but there could be 50-100 or so such requests during the course of the SDLC. Other change requests are what clients claim they “just assumed would be part of the system” even if not explicitly spelled out in the spec. We are having a lot of difficulty managing this process. With no experienced software project managers in our team, we need to come up with a better way to both internally identify whether work being requested is “out of spec”, and communicate this to a client in such a manner that they can understand why what they are asking for is “extra” work. We need a way to track this work and be transparent with it. In the spirit of Agile development, where we are not spec'ing software systems into the ground and back again before development begins, and bearing in mind that clients have access to any developer any time they want, I am looking for some tips and pointers from experienced software project managers on how to handle this sort of "scope creep" problem: tracking it, being transparent with it, and communicating it to clients such that they understand it. Happy to clarify anything as needed. I really appreciate anyone who takes the time to offer some advice. Thanks.

    Read the article

  • MySQL for Excel new features (1.2.0): Save and restore Edit sessions

    - by Javier Rivera
    Today we are going to talk about another new feature included in the latest MySQL for Excel release to date (1.2.0), which can be installed directly from our MySQL Installer downloads page. Since the first release you have been able to open a session to directly edit data from a MySQL table in an Excel worksheet and see those changes reflected immediately in the database. You could also open multiple sessions to work with different tables at the same time (as long as they belong to the same schema). The problem was that if for any reason you were forced to close Excel or the workbook you were working on, you had no way to save the state of those open sessions, and to continue where you left off you needed to reopen them one by one. Well, that's no longer a problem, since we are now introducing a new feature to save and restore active Edit sessions. All you need to do is click the Options button on the main MySQL for Excel panel and make sure the Edit Session Options (highlighted in yellow) are set correctly, especially that Restore saved Edit sessions is checked. Then just begin an Edit session like you would normally do: select the connection and schema on the main panel, then select the table you want to edit data from, click Edit MySQL Data, and import the MySQL data into Excel. You can edit data like you always did with the previous version. To test the save and restore functionality, first save the workbook while at least one Edit session is open, then close the file and reopen the workbook. The next steps differ depending on your version of Excel. Excel 2013 extra step (first): in Excel 2013 you first need to open the workbook with saved Edit sessions, then click the MySQL for Excel icon on the Data menu (notice how in this version, every time you open or create a new file, the MySQL for Excel panel is closed in the new window). Please note that if you work in Excel 2013 with several workbooks with open Edit sessions at the same time, you'll need to repeat this step each time you open one of them. Following steps: in Excel 2010 or earlier, you just need to make sure the MySQL for Excel panel is already open at this point; if it's not, do the previous step (Excel 2013 extra step). For Excel 2010 or older versions you will only need to do this step once. When saved sessions are detected, you will be prompted what to do with them: click Restore to continue working where you left off; click Discard to delete the saved sessions (all Edit session information for this file will be deleted from your computer, so you will no longer be prompted the next time you open this same file); or click Nothing to continue without opening saved sessions (this keeps the saved Edit sessions intact, so you will be prompted about them again the next time you open this workbook). And there you have it: now you can save your Edit sessions, close your workbook or turn off your computer, and still be able to reopen them in the future to continue working right where you were. Today we talked about how you can save your active Edit sessions and restore them later, another feature included in the latest MySQL for Excel release (1.2.0). Please remember you can try this product and many others for free by downloading the installer directly from our MySQL Installer downloads page. Happy editing!

    Read the article

  • Errors with nginx: proc must be mounted. Why do these happen?

    - by Crashalot
    We see these errors in our nginx error log file. What causes them, and what is the right way to fix them on an ongoing basis (as opposed to running "mount /proc /proc -t proc")? We're on nginx 1.4.1 and Passenger 4.0.5.
    [ 2013-06-28 08:22:17.7621 2739/7ff58d8f3700 Pool2/SmartSpawner.h:301 ]: Preloader for /home/p/p started on PID 5181, listening on unix:/tmp/passenger.1.0.2735/generation-0/backends/preloader.5181
    [ 2013-06-28 12:59:31.1651 2739/7ff59078f700 Pool2/Spawner.h:159 ]: [App 19777 stderr] Error: /proc must be mounted
    [ 2013-06-28 12:59:31.1651 2739/7ff59078f700 Pool2/Spawner.h:159 ]: [App 19777 stderr] To mount /proc at boot you need an /etc/fstab line like:
    [ 2013-06-28 12:59:31.1651 2739/7ff59078f700 Pool2/Spawner.h:159 ]: [App 19777 stderr] /proc /proc proc defaults
    [ 2013-06-28 12:59:31.1651 2739/7ff59078f700 Pool2/Spawner.h:159 ]: [App 19777 stderr] In the meantime, run "mount /proc /proc -t proc"
    [ 2013-06-28 12:59:31.1652 2739/7ff58d871700 Pool2/Spawner.h:739 ]: [App 19777 stdout]
    [ 2013-06-28 12:59:34.6642 2739/7ff59078f700 Pool2/Spawner.h:159 ]: [App 19777 stderr] Error: /proc must be mounted
    [ 2013-06-28 12:59:34.6643 2739/7ff59078f700 Pool2/Spawner.h:159 ]: [App 19777 stderr] To mount /proc at boot you need an /etc/fstab line like:
    [ 2013-06-28 12:59:34.6643 2739/7ff59078f700 Pool2/Spawner.h:159 ]: [App 19777 stderr] /proc /proc proc defaults
    [ 2013-06-28 12:59:34.6643 2739/7ff59078f700 Pool2/Spawner.h:159 ]: [App 19777 stderr] In the meantime, run "mount /proc /proc -t proc"
    [ 2013-06-28 12:59:34.6651 2739/7ff58d871700 Pool2/SmartSpawner.h:301 ]: Preloader for /home/p/p started on PID 19777, listening on unix:/tmp/passenger.1.0.2735/generation-0/backends/preloader.19777
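    For reference, the permanent fix is the one the log itself suggests: an /etc/fstab entry so proc is mounted at every boot, instead of re-running the mount command by hand:

        # /etc/fstab
        proc    /proc    proc    defaults    0    0

    After adding the line, "mount -a" (or a reboot) applies it; the spawner errors should stop once /proc is reliably available to the apps Passenger spawns.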

    Read the article

  • backupexec 12.5 not following symlinks on linux agent

    - by Peter Carrero
    Ok, we are at a loss here trying to back up a linux box to a backupexec server... we have a backupexec 12.5 server and a "backupexec for windows servers linux agent" (sigh) running on one of our linux boxes. When a backup runs, we get exceptions reported for our symbolic links. It says something like: BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found. Looking at the selection list, the symlink shows as a 1k file in BUE. Tools-Options-Backup has "Backup files and directories by following symbolic links/junction points" selected. The same checkboxes are selected in Job Setup-Job Properties-Edit Template-Advanced. Additionally, all the checkboxes are checked in Tools-Options-Linux, Unix, and Macintosh and in Job Setup-Job Properties-Edit Template-Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but apparently changing those options produces the same result. Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.

    Read the article

  • MySQL remote access not working - Port Closed?

    - by dave.zap
    I am not able to get a remote connection established to MySQL. From my pc I am able to telnet to 3306 on the existing server, but when I try the same with the new server it hangs for a few minutes, then returns:
    # mysql -utest3 -h [server ip] -p
    Enter password:
    ERROR 2003 (HY000): Can't connect to MySQL server on '[server ip]' (110)
    Here is some output from the server.
    # nmap -sT -O localhost -p 3306
    ... PORT STATE SERVICE 3306/tcp closed mysql ...
    # netstat -anp | grep mysql
    tcp 0 0 [server ip]:3306 0.0.0.0:* LISTEN 6349/mysqld
    unix 2 [ ACC ] STREAM LISTENING 12286 6349/mysqld /DATA/mysql/mysql.sock
    # netstat -anp | grep 3306
    tcp 0 0 [server ip]:3306 0.0.0.0:* LISTEN 6349/mysqld
    unix 3 [ ] STREAM CONNECTED 3306 1411/audispd
    # lsof -i TCP:3306
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    mysqld 6349 mysql 10u IPv4 12285 0t0 TCP [domain]:mysql (LISTEN)
    I am running OS CentOS release 5.8 (Final) and mysql 5.5.28 (Remi). Note: internal connections to mysql work fine. I have disabled IPtables; the box has no other firewall, it runs Apache on port 80 and ssh with no problem. I had followed this tutorial - http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html I have bound the IP address in my.cnf:
    user=mysql
    bind-address = [server ip]
    port=3306
    I even started over by deleting the mysql folder in my datastore and running mysql_install_db --datadir=/DATA/mysql --force and then recreated all the users as per the manual: http://dev.mysql.com/doc/refman/5.5/en/adding-users.html I have created one test user:
    CREATE USER 'test'@'%' IDENTIFIED BY '[password]';
    GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION;
    FLUSH PRIVILEGES;
    So all I can see is that the port is not really open. Where else might I look? Thanks
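    One observation: mysqld is clearly listening on [server ip]:3306, while nmap was pointed at localhost (127.0.0.1), which the bind-address excludes - so the "closed" result may be a red herring. A few standard checks that may help narrow down the remote timeout (error 110 usually points at a firewall somewhere upstream, e.g. at the hosting provider):

        # verify no packet-filter rules survived disabling iptables
        iptables -L -n | grep 3306
        # tcp_wrappers can also silently refuse clients if mysqld was built with it
        cat /etc/hosts.allow /etc/hosts.deny
        # confirm what mysqld actually bound to
        mysql -e "SHOW VARIABLES LIKE 'bind_address';"
        # from the remote pc: a hang (rather than a refusal) suggests packets are dropped in transit
        telnet [server ip] 3306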

    Read the article

  • Linux Distro for Beginners

    - by XLR3204S
    Well... I know that's the question arising all over the Internet, but I couldn't find an answer to suit me after googling for quite some time. I'd like to get a Linux distribution and start learning to use the CLI. I'm looking for a distribution already having GNOME installed, as I'll be using Linux-Command.org as my learning resource, and I'm not very familiar with CLI-based web browsers. I'd mainly like to get to know my way around a UNIX-based system, and then I think I'd like to pick up a CLI-only distribution and start doing more complex stuff. I've tried Ubuntu, Fedora Core, OpenSolaris and FreeBSD (the last two aren't Linux distros, I know). Ubuntu and FC are fine, they do come with Firefox, but I'm not really sure they're meant for learning purposes. OpenSolaris was OK as well, but I haven't got to play with it enough. FreeBSD 7.2 did not want to install itself on my 13" MacBook Pro; it generated a kernel panic every time while copying the files to the disk. So to sum this up: I'm trying to learn Linux, and I'm willing to invest time into this (that is, not giving up when the first problems arise). I also have intermediate knowledge of C++, if that helps, and I'm also using the CLI vim to write small C++ CLI-based programs, so text editing shouldn't be any problem. And... speaking of Macs, how am I going to be limited if I try to learn how to use UNIX-based systems using the OS X Terminal? It uses bash 3.2; isn't this the same shell as the one found on most Linux machines? How does the fact that OS X is based on FreeBSD 4.4 (if I'm not mistaken) affect this? Thanks in advance, and hopefully, I'll have a starting point ASAP.

    Read the article

  • Xen HVM networking won't work

    - by Nathan
    I'm trying to get Xen HVM networking working using route; however, I am failing. Xen PV works fine using Ubuntu, but when installing Ubuntu on HVM it fails to pick up the network. I'll say now that I'm not that experienced with Xen, so I would appreciate any help. vm104 is the HVM that's causing me the problems; here are the configs that I believe should help resolve the problem.
    [root@eros vm104]# cat vm104.cfg
    import os, re
    arch = os.uname()[4]
    if re.search('64', arch): arch_libdir = 'lib64'
    else: arch_libdir = 'lib'
    kernel = '/usr/lib/xen/boot/hvmloader'
    builder = 'hvm'
    memory = 6000
    shadow_memory = '8'
    cpu_weight = 256
    name = 'vm104'
    vif = ['type=ioemu, ip=85.25.x.y, vifname=vifvm104.0, mac=00:16:3e:52:3d:fe, bridge=xenbr0']
    acpi = 1
    apic = 1
    vnc = 1
    vcpus = 4
    vncdisplay = 3
    vncviewer = 0
    vncconsole = 1
    vnclisten = '217.118.x.y'
    vncpasswd = 'kCfb5S4tE7'
    serial = 'pty'
    disk = ['phy:/dev/vpsvg/vm104_img,hda,w', 'file:/home/solusvm/xen/iso/Windows-Server-2008-RC2.iso,hdc:cdrom,r']
    device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
    boot = 'cd'
    sdl = '0'
    usbdevice = 'tablet'
    pae=1
    [root@eros /]# cat /etc/xen/xend-config.sxp | egrep -v "(^#.*|^$)"
    (xend-unix-server yes)
    (xend-unix-path /var/lib/xend/xend-socket)
    (xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
    (network-script network-route)
    (vif-script vif-route)
    (network-script 'network-route netdev=eth0')
    (dom0-min-mem 256)
    (dom0-cpus 0)
    (vnc-listen '0.0.0.0')
    (vncpasswd '')
    (keymap 'en-us')
    The Windows install will not pick up the network - I've tried setting the IP manually, using the Xen server's IP as the gateway and setting the main IP in Windows, but no luck. If anyone needs any more information, let me know; I appreciate any input!

    Read the article

  • Adding PHP to Apache

    - by user528451
    Where I work, we use ancient technology that belongs in a museum. Further, I have to get everything done through system admins. They are telling me that in order to get PHP, they will need to upgrade the operating system as well as the Apache version.
    lcas100[67]% uname -a
    Linux lcas100 2.6.9-11.ELsmp #1 SMP Fri May 20 18:26:27 EDT 2005 i686 i686 i386 GNU/Linux
    lcas100[68]% cat /etc/*-release
    LSB_VERSION="1.3"
    Red Hat Enterprise Linux AS release 4 (Nahant)
    lcas100[75]% /ots/apache/bin/httpd -v
    Server version: Apache/1.3.31 (Unix)
    Server built: Nov 3 2004 18:47:31
    This doesn't make sense to me because apparently Apache 1.3.x supports PHP: http://php.net/manual/en/install.unix.apache.php Furthermore, we have another machine that runs PHP and is running the exact same OS and OS version. The reason I want it on the former machine is that it is mounted on a different file system. Lastly, they tell me that all software the Apache webserver runs will need to be reinstalled/recompiled (assuming an Apache upgrade WAS needed). I am not even sure about that. Are they full of it? Thanks
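    For what it's worth, the classic mod_php build against an existing Apache 1.3 tree (as documented on the php.net page linked above) needs no OS upgrade. A sketch, with the apxs path inferred from the httpd path shown and therefore an assumption:

        # from the PHP source directory; --with-apxs (not apxs2) targets Apache 1.3
        ./configure --with-apxs=/ots/apache/bin/apxs --with-mysql
        make
        make install
        # then in httpd.conf:
        #   AddType application/x-httpd-php .php

    Only Apache itself loads the new module; the content it serves should not need recompiling for this.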

    Read the article

  • Cannot connect on TFS 2012 server through SSL with invalid certificate

    - by DaveWut
    I saw the problem on some forums and even here, but not as specific as mine. So here's the thing: I've configured a TFS 2012 server on one of my personal servers at home, and now I'm trying to make it available through the internet with the help of apache2 on a different UNIX-based physical server. The thing is working perfectly; I don't have any problem accessing the address https://tfs.something.com/tfs through my browser. The address can be pinged and I do have access to the TFS control panel through it. How does it work? Well, with apache2 you can set up a virtual host and configure the ProxyPass and ProxyPassReverse settings, so the traffic can come in externally over a secure SSL connection through a specified domain or sub-domain, but be redirected locally to a clear http session on a different port. This is my current setup. As I already said, I can access the web interface, but when I try to connect with Visual Studio 2012, it can't be done. Here's the error I receive: http://i.imgur.com/TLQIn.png The technical information tells me: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. My SSL certificate is invalid and was automatically generated on my UNIX server. Even if I try to add it to the Trusted Root Certification Authorities, either on my TFS server or on my local workstation, it doesn't work; I still receive the same error. Is there a way to completely ignore certificate validation? If not, what have I done wrong? I mean, I've added the certificate to the trusted root certificates; it should work, as mentioned on some forums... If you need more information, please ask me; I'll be pleased to provide more. Dave

    Read the article

  • Exporting Environment Variables in Ubuntu Linux

    - by stanigator
    I know many people have asked about environment variables before, but I am having a hard time dealing with these paths while ensuring I don't mess around with the original settings. How would you go about executing these commands in Ubuntu in terms of environment variables? Thanks in advance! Please put /home/stanley/Downloads/ns-allinone-2.34/bin:/home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/unix:/home/stanley/Downloads/ns-allinone-2.34/tk8.4.18/unix into your PATH environment; so that you'll be able to run itm/tclsh/wish/xgraph. IMPORTANT NOTICES: (1) You MUST put /home/stanley/Downloads/ns-allinone-2.34/otcl-1.13, /home/stanley/Downloads/ns-allinone-2.34/lib, into your LD_LIBRARY_PATH environment variable. If it complains about X libraries, add path to your X libraries into LD_LIBRARY_PATH. If you are using csh, you can set it like: setenv LD_LIBRARY_PATH If you are using sh, you can set it like: export LD_LIBRARY_PATH= (2) You MUST put /home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/library into your TCL_LIBRARY environmental variable. Otherwise ns/nam will complain during startup.
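    In concrete terms, since Ubuntu's default shell is bash (an sh-style shell), appending the following lines to ~/.bashrc applies all three settings in every new terminal without disturbing the system-wide defaults; the paths are taken verbatim from the notice above:

        export PATH="/home/stanley/Downloads/ns-allinone-2.34/bin:/home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/unix:/home/stanley/Downloads/ns-allinone-2.34/tk8.4.18/unix:$PATH"
        export LD_LIBRARY_PATH="/home/stanley/Downloads/ns-allinone-2.34/otcl-1.13:/home/stanley/Downloads/ns-allinone-2.34/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
        export TCL_LIBRARY="/home/stanley/Downloads/ns-allinone-2.34/tcl8.4.18/library"

    Run "source ~/.bashrc" (or open a new terminal) and the itm/tclsh/wish/xgraph binaries should then be found on the PATH; the ${VAR:+...} form just avoids a stray trailing colon when LD_LIBRARY_PATH was previously unset.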

    Read the article

  • How do I SSH tunnel using PuTTY or SecureCRT through gateway/proxy to development server?

    - by DAE51D
    We have some unix boxes set up in such a way that to get to the development box via ssh, you have to ssh into a 'user@jumpoff' box first. There is no direct ssh connection allowed to 'dev' from anywhere but 'jumpoff'. Furthermore, only key exchange is allowed on both servers. And you always log in to the development box as 'build@dev'. It's painful to always do that hopping. I know this can be done with SOCKS or a tunnel or something... I have set up a FreeBSD VM and I can get things to work perfectly using the unix ssh tools. Basically all I do is make sure my vm's ~/.ssh/id_rsa.pub key is on both jumpoff and dev and use this ~/.ssh/config file:
    # Development Server
    Host ext-dev
    # this must be a resolvable name for "dev" from Jumpoff
    Hostname 1.2.3.4
    User build
    IdentityFile ~/.ssh/id_rsa
    # The Jumpoff Server
    Host ext
    Hostname 1.1.1.1
    User daevid
    Port 22
    IdentityFile ~/.ssh/id_rsa
    # This must come below all of the above
    Host ext-*
    ProxyCommand ssh ext nc $(echo '%h'|cut -d- -f2-) 22
    Then I simply type "ssh ext-dev" and I'm in like Flynn. The problem is I can't get this same thing to work using either PuTTY or SecureCRT -- and to be honest I've not found any tutorials that really walk me through it. I see many on setting up some kind of proxy tunnel for Firefox, but it doesn't seem to be the same concept. I've been messing with trial and error most of the day and nothing has worked (obviously), and I'm at the end of my ssh knowledge and Google searching. I found this link, which seemed to be perfect, but it doesn't work for me: the "Master" connects fine, but the "client" portion doesn't connect. It tells me the remote system refused the connection. http://www.vandyke.com/support/tips/socksproxy.html I've got the VM, PuTTY and SecureCRT all using the same public/private key pairs to make things consistent and easier to debug. Does anyone have a straight-up example of how to do this in Windows?
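    For PuTTY specifically, the closest known equivalent of OpenSSH's ProxyCommand is its "Local" proxy type driving plink. A sketch, assuming your key is loaded in Pageant: in the saved session for dev (host 1.2.3.4, username build), go to Connection > Proxy, set Proxy type to Local, and set the local proxy command to:

        plink.exe daevid@1.1.1.1 -agent -nc %host:%port

    PuTTY substitutes %host and %port with the destination, and plink's -nc option opens a raw tunnel to it from jumpoff, mirroring what the nc-based ProxyCommand above does.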

    Read the article

  • osx bash grep - finding search terms in a large file with one single line

    - by unsynchronized
    Is there a simple unix command line I can enter which lets me isolate, say, 512 bytes either side of a search term, even if there is only one "line" in a very large text file? Ok, this should be easy. Famous last words. I'm not that familiar with grep, but it seems it is mainly used to filter out lines in the input that contain search terms. I have a very large json file that I downloaded that I want to search for a particular term. Before you click the link - it's over 244MB, so be warned - it is from the internet wayback machine and contains lists of zip files of archived photos. I am trying to find mine. Their web interface is broken, so I found the json file that they make public here - it's the last one on the list. When I grep for my username, it finds it, but proceeds to dump that line to the console. The problem is that the line is 244MB long, and it's the only line in the file. I tried using less, but could not get that to do much - it's very slow, and seems to have the same issue. Is there a simple unix command line I can enter which lets me isolate say 512 bytes either side of a search term?
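    One approach that works even when the whole file is a single line: have grep print only the matching region, padding the pattern with up to 512 characters of context on each side ("myusername" and the filename are placeholders):

        grep -oE ".{0,512}myusername.{0,512}" wayback.json

    The -o flag prints just the match rather than the whole 244MB line, and -E enables the {0,512} repetition syntax; both are supported by the BSD grep that ships with OS X. Strictly the bound counts characters rather than bytes, which is close enough for ASCII-heavy json.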

    Read the article

  • QNAP NAS 509 (LINUX) - how to unmount busy volume and find physical disk?

    - by Horst Walter
    On my QNAP TS 509 NAS I have a technical issue: I need to run e2fsck. This works fine for me on md0 (see below), but how can I unmount the busy devices md9 and sda4 in order to do the same? Whenever I try, I fail because the device is busy. [This part is solved, see below] In order to further track down the issue, I need to sort out the physical-disk-to-device relationship: how can I find this out, e.g. md0 is a striped volume on 2 disks, but on which physical disks? Remark: as you can easily derive from my questions, I am not a Linux expert, but I manage to get along.
    /dev/ram0 124.0M 94.1M 29.8M 76% /
    tmpfs 32.0M 80.0k 31.9M 0% /tmp
    /dev/sda4 310.0M 103.9M 206.1M 34% /mnt/ext
    /dev/md9 509.5M 39.2M 470.2M 8% /mnt/HDA_ROOT
    /dev/md0 1.8T 1.4T 444.7G 76% /share/MD0_DATA
    tmpfs 32.0M 0 32.0M 0% /.eaccelerator.tmp
    -- Added -- QNAP seems to be based on Busybox. I do not find something like init / telinit / runlevel. The busybox docs say I need to run the below, but in /var/service sv is not available. I want to go to single-user mode to unmount the devices.
    # cd /var/service
    # sv d *
    # sv u getty*
    -- Added, thanks A4L -- This QNAP box runs a special flavor of Linux, so not all SOPs apply. In my particular case I found a services.sh script that stops all services; after that the drive could be unmounted. The information passed by A4L is valid and worth reading; maybe I'll profit from it next time. Links: http://unix.stackexchange.com/questions/19918/umount-device-is-busy and http://unix.stackexchange.com/questions/15024/umount-device-is-busy-why So the unmount issue is solved; I am still looking for the best option to find the physical-disk-to-volume mapping.
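    For the md-to-physical-disk mapping, the kernel already exposes it through standard Linux interfaces (assuming the QNAP firmware ships mdadm; /proc/mdstat is always present when md is in use):

        cat /proc/mdstat          # lists each md device with the sdX partitions backing it
        mdadm --detail /dev/md0   # member disks, RAID level and array state

    A line like "md0 : active raid0 sda3[0] sdb3[1]" in /proc/mdstat tells you directly which physical disks a striped volume spans.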

    Read the article

  • SSH client and Command Prompt replacements Windows look-and-feel

    - by Oddthinking
    The Problem
    I've worked exclusively in Windows. I can handle that. I've worked exclusively in DOS (a long time ago!). I can handle that. I've worked exclusively in Unix. I can handle that. Right now, I am developing a command-line (python) application on a Windows machine, testing it in a DOS box (i.e. Windows' Command Prompt), and then deploying it to Linux and running it with PuTTY. I cannot handle that. My productivity drops dramatically when CTRL-C cuts in one window (Windows) and kills the process in another (DOS, Linux). My productivity drops dramatically when Enter copies the selection in one window (DOS), deletes the selection in another (Windows), and runs the current half-edited command in the third (PuTTY). My productivity drops dramatically when I cannot hit Undo, Home or End.
    The Solution I am Seeking
    An SSH/Bash command-line client that runs on Windows and, to the extent possible, makes all the standard Windows shortcuts (Cut, Copy, Paste, Undo, Home, End, Insert, Shift-Arrows, etc.) work on a bash command line. Bonus points if it puts the cursor between letters, rather than on them. Plus, an equivalent DOS command-line drop-in that runs on Windows and provides the same interface. I appreciate there may need to be special buttons to actually transfer CTRL codes (like CTRL-C) through in the cases I need them. I suspect the SSH client will need to be specific to a shell (so it knows when it is at the command prompt and when it is inside a running app). I know there are many SSH clients, but I am looking for advice for a particular need. PuTTY feels like an escape route for Unix programmers stuck on Windows. I am the opposite. Can anyone recommend one (or maybe a combination of an SSH client and a command-line replacement)?

    Read the article

  • Should windows services be created with custom users, or should I use one of LocalSystem/LocalService?

    - by Justin Dearing
    I'm asking the question in general for the average custom-developed NT service or unix OSS daemon ported to windows with SCM support. However, at the moment my immediate concern is for mongodb. From my experience with UNIX, I like all my services to run as different unprivileged users. The way this has translated to windows is as follows:
    - Create a local (or domain, if it has to talk to SQL server) windows user with a long random password (lately an ASCII85-encoded guid generated from a different machine). Set the password to never expire and forbid the user from changing it.
    - Remove that user from the "Users" group.
    - Grant that user the "Log on as a service" right.
    - Give it read permission on the folder where the app resides, and write permission on the logs and data files the application uses.
    - Assign the user to the service.
    - Troubleshoot until the service starts.
    My feeling is that the unprivileged users are less powerful than the 3 special service users. I also feel that by isolating which users run which services, I would limit the collateral damage if a way to compromise one service were found.
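    As a sketch of the final assignment step for mongodb from the command line (service name, account and paths are placeholders; note that sc.exe requires a space after each "option="):

        sc.exe create MongoDB binPath= "\"C:\mongodb\bin\mongod.exe\" --service" obj= ".\svc-mongodb" password= "long-random-password"

    The same obj=/password= pair also works with "sc.exe config" to re-point an existing service at the unprivileged account.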

    Read the article

  • get ubuntu terminal to send an escape sequence (control+shift+up)

    - by user62046
    This problem starts when I use emacs (with the -nw option). Let me first explain it. I tried to define a hotkey (for emacs) as follows: (global-set-key [(control shift up)] 'other-window) but it doesn't work (no error, it just doesn't work), and neither does (global-set-key [(control shift down)] 'other-window) But (global-set-key [(control shift right)] 'other-window) and (global-set-key [(control shift left)] 'other-window) work! Because the last two key combinations are used by emacs (as defaults), I don't want to reassign them to other functions. So how could I make control-shift-up and control-shift-down work? I have googled "(control shift up)", and it seems that control-shift-up is used by other people (though with few results). On the Stack Overflow forum, Gille answered me as follows: Ctrl+Shift+Up does send a signal to your computer, but your terminal emulator is apparently not transmitting any escape sequence for it. So your problem is in two parts. First you must get your terminal emulator to send an escape sequence, which depends on your terminal emulator, and is Super User material, or Unix.SE if you're using a unix system. Then you need to declare the escape sequence in Emacs, and my answer explains that part. So I come here with this question: how do I get my terminal (I use Ubuntu 10.04, and the built-in terminal) to send an escape sequence for Control+Shift+Up and Control+Shift+Down?
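    For the Emacs half, a hedged sketch for ~/.emacs, assuming the terminal can be made to emit the xterm-style sequences ESC [ 1 ; 6 A / B for Ctrl+Shift+Up/Down (modifier code 6 = Ctrl+Shift; whether gnome-terminal in 10.04 sends these is exactly the open question above):

        ;; decode the raw escape sequences into the key events global-set-key expects
        (define-key input-decode-map "\e[1;6A" [(control shift up)])
        (define-key input-decode-map "\e[1;6B" [(control shift down)])

    Once the terminal transmits those sequences, the original (global-set-key [(control shift up)] 'other-window) binding works unchanged.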

    Read the article

  • samba "username map" stopped to work

    - by Kris_R
    It was time to upgrade our group server (new HDs, problems with the old installation of DRBD, etc.). Going as usual with CentOS, I upgraded the whole system from 6.3 to 6.4. The latter came with samba 3.6, whereas the old one had 3.5. I transferred most of the users by copying /etc/passwd, /etc/shadow and the samba accounts with pdbedit. Home directories were on an nfs drive. The translation of unix accounts to samba accounts is located in /etc/samba/smbusers. Strangely enough, on some windows clients there was a problem connecting to samba shares. In one case the only thing that worked was to use the unix account instead of the windows name. In another, it was possible to mount the network drive and to open it in Windows Explorer; however, other applications like "Total Commander" gave the message "Cannot connect to z:" at the attempt of opening this drive (sometimes at this moment user/pass were requested). The smb.conf has the following entries:
    [global]
    security = user
    passdb backend = tdbsam
    username map = /etc/samba/smbusers
    ...
    [Kris]
    comment = Kris's Private
    path = /SMB/Users/Kris
    writeable = yes
    read only = no
    browseable = yes
    users = krisr
    printable = no
    security mask = 0777
    force security mode = 0
    directory security mask = 0777
    force directory security mode = 0
    force create mode = 0775
    force directory mode = 6775
    The smbusers:
    # Unix_name = SMB_name1 SMB_name2 ...
    krisr = Kris
    Of course testparm runs without any errors. With samba 3.5 I was used to log outputs of the form "Mapped user Kris to krisr". Nothing like this happens now; there is just the message "check_sam_security: Couldn't find user Kris in passdb". I read on the web that some guys had problems with 3.6 and security = ADS, but those were not helpful for me. I'm seriously thinking about downgrading back to samba 3.5, but before that step I wanted to ask if somebody knows the solution to these problems. p.s. I've asked this question at serverfault but no answer came. Maybe I'll have more luck with this forum. Sorry for the duplicate if any of you reads both.
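    Two quick checks with standard samba tooling may help localize whether the migrated accounts or the 3.6 map handling is at fault:

        pdbedit -L -v                       # is the krisr account really present in the new tdbsam?
        smbclient //localhost/Kris -U Kris  # exercise the mapped windows name directly against smbd

    If pdbedit lists krisr but the smbclient login as Kris fails with the same check_sam_security message, that points at the username map being consulted differently in 3.6 than it was in 3.5.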

    Read the article

  • Passenger connection reset by peer issue

    - by user887372
    I am new to ruby on rails. I am using passenger 3.0.17 to deploy my Rails 3.2.6 project. My project is working fine, but I get a 500 internal error when I try to upload files to the server. I checked my passenger log and found:
    [ pid=20654 thr=140394143790848 file=ext/nginx/HelperAgent.cpp:933 time=2012-11-01 09:29:57.82 ]: Uncaught exception in PassengerServer client thread:
    exception: write() failed: Connection reset by peer (104)
    backtrace:
    in 'void Client::forwardResponse(Passenger::SessionPtr&, Passenger::FileDescriptor&, const Passenger::AnalyticsLogPtr&)' (HelperAgent.cpp:705)
    in 'void Client::handleRequest(Passenger::FileDescriptor&)' (HelperAgent.cpp:859)
    in 'void Client::threadMain()' (HelperAgent.cpp:952)
    2012/11/01 09:29:27 [crit] 20691#0: *431 mkdir() "/tmp/passenger-standalone.20640/proxy_temp/2" failed (2: No such file or directory) while reading upstream, client: 124.172.71.55, server: _, request: "GET /assets/jquery.js?body=1 HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "test.com:3000", referrer: "http://test.com:3000/"
    2012/11/01 09:29:33 [crit] 20691#0: *435 mkdir() "/tmp/passenger-standalone.20640/proxy_temp/3" failed (2: No such file or directory) while reading upstream, client: 124.172.71.55, server: _, request: "GET /assets/background.png HTTP/1.1", upstream: "passenger:unix:/passenger_helper_server:", host: "test.com:3000", referrer: "http://test.com:3000/"
    [ pid=20654 thr=140394115462912 file=ext/nginx/HelperAgent.cpp:933 time=2012-11-01 09:29:33.543 ]: Uncaught exception in PassengerServer client thread:
    exception: write() failed: Connection reset by peer (104)
    backtrace:
    in 'void Client::forwardResponse(Passenger::SessionPtr&, Passenger::FileDescriptor&, const Passenger::AnalyticsLogPtr&)' (HelperAgent.cpp:705)
    in 'void Client::handleRequest(Passenger::FileDescriptor&)' (HelperAgent.cpp:859)
    in 'void Client::threadMain()' (HelperAgent.cpp:952)
    Please guide me regarding the issue. I am unable to find the reason for this peer reset and the failed mkdir(). Thanks in advance

    Read the article

  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works 'out of the box' on an Apache server. However, I am having a lot of trouble running it in nginx. The following is the nginx configuration I am using:
    server {
        root /home/test/public;
        index index.php;
        access_log /home/test/logs/access.log;
        error_log /home/test/logs/error.log;
        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to index.html
            try_files $uri $uri/ index.php;
            # Uncomment to enable naxsi on this location
            # include /etc/nginx/naxsi.rules
        }
        # pass the PHP scripts to FastCGI server listening on unix socket
        location ~ \.php($|/) {
            fastcgi_pass unix:/tmp/phpfpm.sock;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
        }
        location ~ /\.ht {
            deny all;
        }
    }
    I am able to get the homepage, but am having problems with the inner pages. The inner pages display an "Access denied". Possibly the rewrite is not working; in effect, I think it's querying and trying to execute php files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help, in advance.
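    One likely culprit is that try_files fallback: "index.php" without a leading slash is not a valid internal-redirect target, so non-file requests never reach the dispatcher. A hedged variant of the location block that is commonly used for concrete5 pretty URLs (worth testing against your version):

        location / {
            # hand anything that is not a real file or directory to the dispatcher
            try_files $uri $uri/ /index.php?$query_string;
        }

    If inner pages still fail, some concrete5 setups instead expect the request path as PATH_INFO (i.e. a fallback of /index.php$uri), which the existing fastcgi_split_path_info line is already prepared to handle.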

    Read the article
