Search Results

Search found 25534 results on 1022 pages for 'write powershell'.

Page 544/1022

  • If statement causing xna sprites to draw frame by frame

    - by user1489599
    I'm a bit new to XNA, but I wanted to write a simple program that fires a cannon ball from a cannon at a 45-degree angle. It works fine outside of my keyboard I/O if statement, but when I wrap the code in an if statement that checks whether the user hits the space bar, the sprite draws one frame at a time, once per press of the space bar. This is the code in question: if (currentKeyboardState.IsKeyUp(Keys.Space) && previousKeyboardState.IsKeyDown(Keys.Space) && !skullBall.Alive) { //works outside the keyboard input if statement //{ skullBall.Position = cannon.Position; skullBall.DeltaY = -(float)(Math.Sin(MathHelper.ToRadians(45)) * 50/*39.7577*/ * time + 0.5 * (gravity * (time * time))); skullBall.DeltaX = (float)(Math.Cos(MathHelper.ToRadians(45)) * 50/*39.7577*/ * time); skullBall.Alive = true; //} } The skull ball represents the cannon ball and the cannon is just the starting point. DeltaX and DeltaY are the values I'm using to update the cannon ball's position each update. I know it's not ideal to reset the cannon ball to the cannon's position every time Update is called, but it's not really noticeable right now. After examining my code, does anyone notice any errors that would cause the sprite to display frame by frame instead of drawing a full animation of the cannon ball leaving the cannon and moving from there?
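    For what it's worth, a common XNA pattern is to detect the key edge once, set a launch velocity at that moment, and then integrate the ball's position on every Update from the frame's elapsed time, rather than recomputing the deltas inside the key check. A minimal sketch under those assumptions; SkullBall, the constants, and the gravity scale are hypothetical stand-ins, not the poster's actual classes:

```csharp
// Minimal sketch: fire once on the key-press edge, store a launch velocity,
// and integrate position every Update from the frame's elapsed time.
// SkullBall, the launch constants, and the gravity scale are hypothetical.
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

public class SkullBall
{
    public Vector2 Position;
    public Vector2 Velocity;
    public bool Alive;
}

public class CannonGame : Game
{
    KeyboardState currentKeyboardState, previousKeyboardState;
    readonly SkullBall skullBall = new SkullBall();
    readonly Vector2 cannonPosition = new Vector2(50, 400);
    const float Gravity = 9.8f;     // downward acceleration, units per second^2 (assumed scale)
    const float MuzzleSpeed = 50f;  // matches the 50 used in the question

    protected override void Update(GameTime gameTime)
    {
        previousKeyboardState = currentKeyboardState;
        currentKeyboardState = Keyboard.GetState();

        // Fire exactly once per press: down this frame, up on the previous frame.
        bool spaceJustPressed = currentKeyboardState.IsKeyDown(Keys.Space)
                             && previousKeyboardState.IsKeyUp(Keys.Space);

        if (spaceJustPressed && !skullBall.Alive)
        {
            skullBall.Position = cannonPosition;
            // 45-degree launch: velocity is set once here, not recomputed per frame.
            skullBall.Velocity = new Vector2(
                (float)Math.Cos(MathHelper.ToRadians(45)) * MuzzleSpeed,
                -(float)Math.Sin(MathHelper.ToRadians(45)) * MuzzleSpeed);
            skullBall.Alive = true;
        }

        if (skullBall.Alive)
        {
            float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;
            skullBall.Velocity.Y += Gravity * dt;          // gravity pulls the ball back down
            skullBall.Position += skullBall.Velocity * dt; // velocity moves it each frame
        }

        base.Update(gameTime);
    }
}
```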

  • SQL Server 2008 R2: These are a Few of My Favorite Things

    - by smisner
    This month's T-SQL Tuesday is hosted by Jorge Segarra (blog | twitter), who decided that we should write about our favorite new feature in SQL Server 2008 R2. The majority of my published work concentrates on Reporting Services, so the obvious answer for me is...Reporting Services. I can't pick just one thing in Reporting Services, so instead I thought I'd compile a list of my posts on the new features in Reporting Services 2008 R2: Map Wizard for spatial data (The World is But a Stage), pagination features (I've Got Your Page Number), lookup functions (Look Up, Look Down, Look All Around - Part I, Part II, Part III), the Test Connection button (Testing, Testing 1-2-3), and conditional formatting based on render format, i.e. RenderFormat (As You Like It). I also wrote an overview of the business intelligence features in SQL Server 2008 R2 for Microsoft Press in the free e-book, Introducing Microsoft SQL Server 2008 R2, if you're curious about what else is new in both the BI platform and the relational engine.

  • Drivers for Ubuntu 13.10 [on hold]

    - by Fernando De Souza Martins
    I just installed Ubuntu 13.10. My screen resolution doesn't fit my screen, as the Ubuntu interface stretches beyond it, so I thought I might install NVIDIA's driver, which I know lets me set the exact resolution I need. So I began a two-hour quest: I downloaded the driver hoping there would be a wizard to install it, but no. I did a bit of research and found the feature that I think is called Additional Drivers in English, but it won't show the NVIDIA drivers. I tried the terminal, but once I enter the commands I found, it asks for a password and I can't seem to type anything when the password is requested. So, my question, obviously: how do I install this driver? I'm not sure if this is appropriate, but why doesn't Ubuntu have a wizard to install things? I feel like I'm working for the OS when it should be the other way around, but I love the concept of Linux, so I'm pushing forward and trying to use it. Another thing: on Windows I had to install a bunch of drivers and applications for those drivers; do I need to install any other drivers here? It also seems I can't change my mouse's sensitivity in the OS, so how do I do that? I'm sorry I'm asking all of this, but it seems necessary.

  • Disable disk caches in AWS EBS for PostgreSQL?

    - by Alexandr Kurilin
    It's my understanding that, without correctly disabling OS-level and drive-level caching, there is a chance that in the case of a system failure the Write-Ahead Log might not be saved correctly and might in fact get corrupted, possibly preventing data recovery. I've already made sure that wal_sync_method=fdatasync, however I was unable to make any configuration changes with hdparm, since I get the following: $ sudo hdparm -I /dev/xvdf /dev/xvdf: HDIO_DRIVE_CMD(identify) failed: Invalid argument Looks like that option is not available in the kind of setup you get in EC2. Am I missing anything here? Are there any other obvious caches I have to disable to ensure the WAL's safety?

  • Get to Know a Candidate (6 of 25): Jill Stein – Green Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. Stein is a physician with degrees from Harvard College and Harvard Medical School. She serves on the boards of Greater Boston Physicians for Social Responsibility and MassVoters for Fair Elections, and has been active with the Massachusetts Coalition for Healthy Communities. Jill Stein advocates a "Green New Deal" in which renewable energy jobs would be created to address climate change and environmental issues, with the objective of employing "every American willing and able to work". Citing the research of Dr. Phillip Harvey, Professor of Law & Economics at Rutgers University, as evidence of the successful economic effects of the 1930s' New Deal projects, Stein would fund the plan with a 30% reduction in the U.S. military budget, returning US troops home, and increasing taxes on areas such as capital gains, offshore tax havens, and multimillion-dollar real estate. Stein plans on addressing what she sees as a growing convergence of environmental crises in water, soil, fisheries, and forests through the creation of sustainable infrastructure based in clean renewable energy generation and sustainable-communities principles, such as increasing intra-city mass transit and inter-city railroads, creating 'complete streets' that safely encourage bike and pedestrian traffic, and building regional food systems based on sustainable organic agriculture. The Green Party of the United States was founded in 1991 as a voluntary association of state green parties. With its founding, the Green Party of the United States became the primary national Green organization in the United States, eclipsing the Greens/Green Party USA, which emphasized non-electoral movement building. The Green Party of the United States emphasizes environmentalism, non-hierarchical participatory democracy, social justice, respect for diversity, peace, and nonviolence. Their "Ten Key Values," which are described as non-authoritative guiding principles, are as follows: grassroots democracy; social justice and equal opportunity; ecological wisdom; nonviolence; decentralization; community-based economics; feminism and gender equality; respect for diversity; personal and global responsibility; and future focus and sustainability. The Green Party does not accept donations from corporations. Thus, the party's platforms and rhetoric critique any corporate influence and control over government, media, and American society at large. Stein has access to 403 electoral votes and is a write-in candidate in GA, IN, and MS. Learn more about Jill Stein and the Green Party on Wikipedia.

  • How to REALLY start thinking in terms of objects?

    - by Mr Grieves
    I work with a team of developers who all have several years of experience with languages such as C# and Java. Most of them are young enough to have been taught OOP as the standard way to develop software at university and are very comfortable with concepts such as inheritance, abstraction, encapsulation, and polymorphism. Yet many of them, and I have to include myself, still tend to create classes which are meant to be used in a very functional fashion. The resulting software often consists of several smaller classes which correctly represent business objects, which get passed through larger classes that only supply ways to modify and use those objects (functions). Large, complex, difficult-to-maintain classes named Manager are usually the result of such behaviour. I can see two theoretical reasons why people might write this type of code: It's easy to start thinking of everything in terms of the database. Deep down, for me, a computer handling a web request feels more like a functional operation than an object-oriented one when you think about request handlers, threads, processes, CPU cores, and CPU operations. I want source code which is easy to read and easy to modify. I have seen excellent examples of OO code which meet these objectives. How can I start writing code like this? How can I really start thinking in an object-oriented fashion? How can I share such a mentality with my colleagues?
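    One concrete way to practice the shift is to move each rule onto the object whose data it uses, so callers tell the object what to do instead of pulling its state into a Manager. A small, hypothetical before/after sketch (the invoice and the late-fee rule are made up purely for illustration):

```csharp
// Hypothetical before/after sketch: the same late-fee rule written first as a
// Manager manipulating an anemic object, then as behaviour on the object itself.
namespace Before
{
    public class Invoice
    {
        public decimal Total { get; set; }
        public bool Paid { get; set; }
    }

    public class InvoiceManager
    {
        // The Manager asks the object for its state and applies the rule itself.
        public void ApplyLateFee(Invoice invoice)
        {
            if (!invoice.Paid)
                invoice.Total += invoice.Total * 0.05m;
        }
    }
}

namespace After
{
    public class Invoice
    {
        private decimal total;
        private bool paid;

        public Invoice(decimal total)
        {
            this.total = total;
        }

        public void MarkPaid()
        {
            paid = true;
        }

        // Callers tell the invoice what to do; the rule lives with the data it governs.
        public void ApplyLateFee()
        {
            if (!paid)
                total += total * 0.05m;
        }
    }
}
```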

  • IIS's SMTP Pickup timing

    - by fatcat1111
    I have IIS's SMTP server set up as a closed relay, and it's working nicely. I also have an application that writes EML files. If the EML files are written to a temporary directory, then moved to the server's Pickup directory, email is sent as expected. However, if I have the application write the EML files directly to the Pickup directory, the email will often fail to send. This seems to be a race condition: the server starts processing the EML file as soon as it detects it in Pickup, even though the application hasn't completed writing it. The result is the server considers the EML to be malformed, and it punts it to Badmail. While I very much appreciate the server's earnestness, it seems that I need to dial it back a bit for this scenario. Does anybody know if IIS's SMTP server's polling frequency can be configured? I am using IIS7, Windows Server 2008 R2. The application that writes the EML cannot be modified.
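    The temp-then-move approach the question already describes is the usual cure for this race, because a move within the same volume hands the pickup service a complete file in one step. It does not help when the writing application cannot be modified, but for anything that can be, a minimal sketch (paths are hypothetical placeholders) looks like this:

```csharp
// Minimal sketch of the write-to-temp-then-move pattern mentioned in the question.
// Paths are hypothetical; File.Move within the same NTFS volume is effectively
// atomic, so the pickup service never sees a half-written EML file.
using System;
using System.IO;

class EmlDropper
{
    const string TempDir   = @"C:\inetpub\mailroot\Temp";    // assumed staging location
    const string PickupDir = @"C:\inetpub\mailroot\Pickup";  // assumed pickup directory

    public static void Drop(string emlContents)
    {
        string name = Guid.NewGuid().ToString("N") + ".eml";
        string tempPath   = Path.Combine(TempDir, name);
        string pickupPath = Path.Combine(PickupDir, name);

        File.WriteAllText(tempPath, emlContents);  // finish writing the file first...
        File.Move(tempPath, pickupPath);           // ...then hand it to the SMTP server
    }
}
```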

  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On SuperUser, I asked a (possibly silly) question about processors using mathematical shortcuts, and I would like to look at the possibility of a software application of that concept. I'd like to write a simulation of Japanese multiplication to get benchmarks on large calculations utilizing the shortcut vs. traditional CPU multiplication. I'm curious as to whether it makes sense to try this. My question: I'd like to know whether or not a software math shortcut, as described above, is actually a shortcut at all. This is a question of programming concept. By utilizing a simulation of Japanese multiplication, is a program actually capable of improving calculation speed? Or am I doomed from the start? The answer to this question isn't required to determine whether or not the experiment will succeed, but rather whether or not it's logically possible for such a thing to occur in any program, using this concept as an example. My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU arithmetic unit can. I think this would be a very interesting finding if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result via fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
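    One way to test the theory before building the full simulation is a small benchmark harness. A minimal sketch, with an addition-only (shift-and-add) routine standing in for the digit-by-digit shortcut; the numbers and loop counts are arbitrary, and this only shows how the comparison could be wired up:

```csharp
// Minimal benchmark sketch for the experiment described in the question:
// time the hardware multiply against an addition-only routine standing in
// for the "Japanese" (digit-by-digit) shortcut.
using System;
using System.Diagnostics;

class MultiplicationBenchmark
{
    // Addition-only multiplication via shift-and-add over the binary digits,
    // roughly the spirit of summing per-digit partial products.
    static ulong MultiplyByAddition(ulong a, ulong b)
    {
        ulong result = 0;
        while (b != 0)
        {
            if ((b & 1) != 0) result += a;
            a += a;   // doubling is still just an addition
            b >>= 1;
        }
        return result;
    }

    static void Main()
    {
        const int iterations = 10_000_000;
        ulong x = 123456789, y = 987654321, sink = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) sink ^= x * y;
        sw.Stop();
        Console.WriteLine($"hardware multiply: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < iterations; i++) sink ^= MultiplyByAddition(x, y);
        sw.Stop();
        Console.WriteLine($"addition-only:     {sw.ElapsedMilliseconds} ms");

        Console.WriteLine(sink); // keep the results observable so nothing is optimized away
    }
}
```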

  • How to recover from failed Mysql schema update, with replication?

    - by OmerGertel
    I have two MySQL servers configured with master-slave replication. Before we deploy a new application version we: 1) STOP SLAVE; 2) take a MySQL dump of the slave. However, if a mistake is made during the deployment of the new schema version (a table is dropped by mistake, for example), having the slave intact doesn't help. Our service is write-intensive, so we can't turn it back on until we have a working master. If we now load the MySQL dump back into the master, it will take a long time, during which our service remains down. What is the best practice for recovering from such a mistake? How can I set up the system so I can easily promote the slave, turn on our service, and only then tend to the broken database? Mainly, I'm worried about re-syncing the slave and the master after changes are made on the slave.

  • How should I host our scalable worker processes?

    - by Pieter Breed
    We are designing a new architecture for an enterprise business. The principle we've followed so far is not to develop what you can (possibly buy and) deploy, i.e., don't reinvent any wheels. In this way we've decided on CQRS, RabbitMQ, Riak and a bunch of other things. We still need to write /some/ business code though, and this will be in the form of worker processes, which will consume commands from a message queue and, after any side-effects, produce events onto another message queue. The idea behind this is that via the competing-consumers design we get a scalable design right out of the box. One option is to write a management infrastructure that will know how to: deploy code, instantiate processes, kill processes, update configuration, etc., i.e. provide fault tolerance and scalability. Also, this is exactly what something like GAE or Heroku does for you, but in a public setting, and in our organization, public is bad. My question is: is there an out-of-the-box solution that we can use to host our consumers in? Like a private cloud or private platform-as-a-service, a private Heroku or GAE. Is there some kind of software or software product with which we can do all of these things and thereby get scalability and fault tolerance over our consumers?
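    For what it's worth, the consumer side of the competing-consumers design stays small no matter where it is hosted; scaling out is simply running more copies of the same process against the same queue. A minimal sketch with the RabbitMQ .NET client, where the queue names and the Handle method are hypothetical placeholders:

```csharp
// Minimal competing-consumers worker sketch using the RabbitMQ .NET client.
// Queue names and the Handle() body are hypothetical; scaling out is just
// running more copies of this process against the same "commands" queue.
using System;
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Worker
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare("commands", durable: true, exclusive: false, autoDelete: false);
            channel.QueueDeclare("events",   durable: true, exclusive: false, autoDelete: false);
            channel.BasicQos(0, prefetchCount: 1, global: false); // one unacked message per worker

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                string command = Encoding.UTF8.GetString(ea.Body.ToArray());
                string resultEvent = Handle(command);            // side effects happen here

                var body = Encoding.UTF8.GetBytes(resultEvent);
                channel.BasicPublish("", "events", basicProperties: null, body: body);
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };
            channel.BasicConsume("commands", autoAck: false, consumer: consumer);

            Console.ReadLine(); // keep the worker process alive
        }
    }

    static string Handle(string command) => "handled: " + command;
}
```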

  • Working with Reporting Services Filters – Part 2: The LIKE Operator

    - by smisner
    In the first post of this series, I introduced the use of filters within the report rather than in the query. I included a list of filter operators, and then focused on the use of the IN operator. As I mentioned in the previous post, the use of some of these operators is not obvious, so I'm going to spend some time explaining them as well as describing ways that you can use report filters in Reporting Services in this series of blog posts. Now let's look at the LIKE operator. If you write T-SQL queries, you've undoubtedly used the LIKE operator to produce a query using the % symbol as a wildcard for multiple characters, like this: select * from DimProduct where EnglishProductName like '%Silver%' And you know that you can use the _ symbol as a wildcard for a single character, like this: select * from DimProduct where EnglishProductName like '_L Mountain Frame - Black, 4_' So when you encounter the LIKE operator in a Reporting Services filter, you probably expect it to work the same way. But it doesn't. You use the * symbol as a wildcard for multiple characters, as shown here: Expression: [EnglishProductName]; Data Type: Text; Operator: Like; Value: *Silver*. Note that you don’t have to include quotes around the string that you use for comparison. Books Online has an example of using the % symbol as a wildcard for a single character, but I have not been able to successfully use this wildcard. If anyone has a working example, I’d love to see it!

  • Regular Expressions Cookbook Ebook Deal of the Day

    - by Jan Goyvaerts
    Every day O’Reilly has an “ebook deal of the day” offering one or a bunch of their books in electronic format for only $9.99. Twice this year I received an email from O’Reilly notifying me that Regular Expressions Cookbook was on sale. But each time the email was sent on the morning of the day itself. When it’s morning in California it’s already bedtime for me here in Thailand. So I never saw the emails until the next day, making it rather pointless to blog about the deal. But this time O’Reilly has listened to my request for advance notification. I just got an email this morning saying Regular Expressions Cookbook will be part of the Ebook Deal of the Day for 15 September 2010. That’s 15 September on the US west coast. As I write this there are a few hours to go before the deal starts at one minute past midnight, California time. You can get any O’Reilly Cookbook as an ebook for only $9.99. The normal price for Regular Expressions Cookbook as an ebook is $31.99. The download includes the book in PDF, ePub, Mobi (for Kindle), DAISY, and Android formats.

  • Dynamic procmail filters

    - by WombaT
    I need procmail to place incoming mail into a specific folder depending on a set of rules. I know how I can accomplish this, but only by writing a static set of rules in a specific file. What I really need is to configure procmail to use rules stored in a MySQL database. How can I do this? I've read a bit about it, and one solution I found is to pipe the message to a PHP/Perl script and have it return the folder name in which to place the message. But I have no idea how to use a PHP script as a rule and then use its return value.

  • Dynamic filter expressions in an OpenAccess LINQ query

    We had some support questions recently where our customers had the need to combine multiple smaller predicate expressions with either the OR or the AND logical operator (these will be the || and && operators if you are using C#). And because the code from the answer that we sent to these customers is very interesting, and can easily be refactored into something reusable, we decided to write this blog post. The key thing that one must know is that if you want your predicate to be translated by OpenAccess ORM to SQL and executed on the server, you must have a LINQ Expression that is not compiled. So, let’s say that you have these smaller predicate expressions: Expression<Func<Customer, bool>> filter1 = c => c.City.StartsWith("S"); Expression<Func<Customer, bool>> filter2 = c => c.City.StartsWith("M"); Expression<Func<Customer, bool>> filter3 = c => c.ContactTitle == "Owner"; And ...
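    The usual technique (sketched here independently of OpenAccess itself) is to rewrite one predicate's parameter so that both bodies share a single ParameterExpression, and then join them with Expression.OrElse or Expression.AndAlso; the combined lambda is still an uncompiled expression tree that a LINQ provider can translate. A minimal sketch, with the helper names being hypothetical:

```csharp
// Minimal sketch of combining two predicate expressions with OrElse while
// keeping a single, uncompiled expression tree. Helper names are hypothetical.
using System;
using System.Linq.Expressions;

static class PredicateCombiner
{
    public static Expression<Func<T, bool>> Or<T>(
        Expression<Func<T, bool>> left,
        Expression<Func<T, bool>> right)
    {
        // Reuse the left expression's parameter so both bodies refer to the
        // same ParameterExpression instance.
        var parameter = left.Parameters[0];
        var rewrittenRight = new ReplaceParameterVisitor(right.Parameters[0], parameter)
            .Visit(right.Body);
        return Expression.Lambda<Func<T, bool>>(
            Expression.OrElse(left.Body, rewrittenRight), parameter);
    }

    private sealed class ReplaceParameterVisitor : ExpressionVisitor
    {
        private readonly ParameterExpression from;
        private readonly ParameterExpression to;

        public ReplaceParameterVisitor(ParameterExpression from, ParameterExpression to)
        {
            this.from = from;
            this.to = to;
        }

        protected override Expression VisitParameter(ParameterExpression node)
            => node == from ? to : base.VisitParameter(node);
    }
}

// Usage: var startsWithSOrM = PredicateCombiner.Or(filter1, filter2);
// The result can be passed to Where() and still be translated to SQL.
```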

  • Sending UDP/514 data magically appears in syslog without rsyslog running

    - by ale
    I’m using a programming language without a library to log to rsyslog over UDP. I thought I was going to need to write a library but I discovered something weird. If I send data on UDP/514 with the port open on the server then the data appears in the server’s syslog. rsyslogd isn’t running so syslog isn’t doing this. Data doesn’t get formatted into a syslog message so rsyslogd really isn’t doing this (only raw text enters syslog). Linux must see the data coming in on this port and know that it should go into /var/log/messages? If I do the same on another port (e.g. UDP/515) then nothing appears in the log! What is doing this? Some CentOS feature? The kernel?

  • How to mount remote samba share from local host with multiple groups?

    - by Dragos
    I am using mount.cifs to mount a remote samba share (both client and server are Ubuntu Server 8.04) like this: mount.cifs //sambaserver/samba /mountpath -o credentials=/path/.credentials,uid=someuser,gid=1000 $ cat .credentials username=user password=password I mounted the share as a local user with a username and password via mount.cifs, but the problem is that the user is part of multiple groups on the remote system, and with mount.cifs I can only specify one gid. Is there a way to specify all the gids that the remote user has? In other words, is there a way to: 1) mount the remote samba share with multiple groups on the local system? 2) browse the mount from 1) in the terminal, since I want to pass some files from samba as arguments to local programs? Other solutions would be: Nautilus sftp://, which runs through GVFS, but newer GNOME no longer writes ~/.gvfs to disk, so I can't browse it in the terminal. The last solution would be NFS, but that means I have to synchronize the uids and gids on the local system with the ones from the server.

  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of Scrum. If you are not familiar with Scrum, it's just an alternative to a more traditional waterfall model, where a series of features is worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF. We use Visual C# 2010 Express edition as an IDE. If we work on a sprint and add a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such; we just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal: it took 3 or 4 days and the devs simply fixed up any bugs found in the regression phase. But now, as the app gets larger and larger and incorporates more and more features, the regression is spanning weeks. I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried out in a commercial environment, or to research the possibility of UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either; how complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance. Edit: Thanks for the information. Does anybody know of any alternatives to what has been mentioned so far (NUnit, RhinoMocks and CodedUI)?
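    On the unit-testing side, the tests themselves can stay very small when they target classes below the WPF layer (driving the UI itself is what tools such as CodedUI aim at). A minimal, entirely hypothetical NUnit example:

```csharp
// Minimal, hypothetical NUnit example: unit tests exercise a plain class
// below the WPF layer, so they run in seconds and need no UI automation.
using NUnit.Framework;

public class DiscountCalculator
{
    public decimal Apply(decimal price, int quantity)
        => quantity >= 10 ? price * 0.9m : price;
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void Bulk_orders_get_ten_percent_off()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(90m, calculator.Apply(100m, 10));
    }

    [Test]
    public void Small_orders_pay_full_price()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(100m, calculator.Apply(100m, 5));
    }
}
```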

  • Script language native extensions - avoiding name collisions and cluttering others' namespace

    - by H2CO3
    I have developed a small scripting language and I've just started writing the very first native library bindings. This is practically the first time I'm writing a native extension to a script language, so I've run into a conceptual issue. I'd like to write glue code for popular libraries so that they can be used from this language, and because of the design of the engine I've written, this is achieved using an array of C structs describing the function name visible by the virtual machine, along with a function pointer. Thus, a native binding is really just a global array variable, and now I must obviously give it a (preferably good) name. In C, it's idiomatic to put one's own functions in a "namespace" by prepending a custom prefix to function names, as in myscript_parse_source() or myscript_run_bytecode(). The custom name shall ideally describe the name of the library which it is part of. Here arises the confusion. Let's say I'm writing a binding for libcURL. In this case, it seems reasonable to call my extension library curl_myscript_binding, like this: MYSCRIPT_API const MyScriptExtFunc curl_myscript_lib[10]; But now this collides with the curl namespace. (I have even thought about calling it curlmyscript_lib but unfortunately, libcURL does not exclusively use the curl_ prefix -- the public APIs contain macros like CURLCODE_* and CURLOPT_*, so I assume this would clutter the namespace as well.) Another option would be to declare it as myscript_curl_lib, but that's good only as long as I'm the only one who writes bindings (since I know what I am doing with my namespace). As soon as other contributors start to add their own native bindings, they now clutter the myscript namespace. (I've done some research, and it seems that for example the Perl cURL binding follows this pattern. Not sure what I should think about that...) So how do you suggest I name my variables? Are there any general guidelines that should be followed?

  • nvidia-settings makes one of my dual monitors grey and useless, disables network

    - by Kerrick
    I'm running Ubuntu 12.04 64-bit, Precise Pangolin, with a PNY GTS 250 1GB video card and a monitor plugged into each of the DVI ports. I'm using the proprietary drivers (post-release updates). If I set anything to do with Separate X Screens in nvidia-settings (and write it to xorg.conf and reboot), my second monitor has a grey background, no menu bar, and no ability to hold a window; the second monitor doesn't get picked up in a screenshot, and if I move my mouse cursor onto it, the cursor is an ugly black X. Plus, my network is unable to connect to anything. If I subsequently delete /etc/X11/xorg.conf and reboot, everything goes back to working, albeit with a single monitor activated. If I set anything to do with TwinView in nvidia-settings, my second monitor starts working, but it isn't seen as a second monitor by Ubuntu, so I can't apply color calibration to it separately. Plus, my mouse gets "caught" between the monitors every time I try to move my cursor between the two. What gives? If it helps, this is the xorg.conf that nvidia-settings generates for Separate X Screens.

  • Can no longer boot with rEFIt and Grub on early 2006 MacBook Pro

    - by Don Quixote
    I don't know what happened to cause this. I have Snow Leopard, Ubuntu 11.04 Natty Narwhal and Windows XP SP3 on my early 2006 MacBook Pro. It is a Core Duo unit, NOT Core 2 Duo, so it is 32-bit only - Model Identifier MacBookPro1,1. I use rEFIt 0.14 for my boot menu. For some reason neither XP nor Ubuntu would boot anymore. I'd just get a black screen with a rapidly flashing underscore in the top-left corner. Having both those OSes failing to boot suggested a problem with the boot loader in my MBR. The rEFIt partition tool verified that my MBR partitions were still synced with my GPT partitions, so I rewrote my MBR partition table with fdisk while booted from Parted Magic: # fdisk /dev/sda (fdisk warns about the disk having a GPT. I press on anyway.) p (Print the existing partition table to make sure it's OK.) w (Write the old partition table back to disk. This also writes a new MBR boot loader.) After this XP would boot but Ubuntu would not, with the same symptom. Next I used update-grub while chrooted into Ubuntu from Parted Magic: # mount /dev/sda3 /mnt # mount --bind /dev /mnt/dev # mount --bind /sys /mnt/sys # mount --bind /proc /mnt/proc # chroot /mnt Chroot issues some warnings about not being able to identify some group IDs. I don't know why that happens, or whether it is a problem. At this point, while I am still booted off of Parted Magic's kernel, I am running from Natty's filesystem. # update-grub update-grub detects each of my operating systems and claims to complete successfully, but Ubuntu still won't boot. I asked this same question over at rEFIt's Sourceforge support forum but there have been no replies yet. I also Googled quite a bit, and see many who have the same black screen problem, but none of their situations seem quite like mine. Thanks for any help you can give me. -- Don Quixote

  • What is the best way to track / record the current programming project you work on? [duplicate]

    - by user2424160
    This question already has answers here: Methodology for Documenting Existing Code Base (6 answers); When do you start documenting the code? (13 answers); Where should a programmer explain the extended logic behind the code? (5 answers). I have had this problem for a long time and I want to know how it's done on real / big company projects. Suppose I have a project to build a website. I divide the project into subtasks and work through them. But suppose I have task1 in hand, like exporting the page to PDF. I spend 3 days doing it, come across various problems and many Stack Overflow questions, and in the end I solve it. Then, 4 months later, someone tells me there is an error in the code. By that point I have completely forgotten about (60%) of how I did it and why I did it that way. I document the code, but I can't write the whole story in the code, so I have to spend a lot of time on the code to find what the problem was and why I added this line, etc. I want to know whether there is any way to log the steps taken in completing the project, so that I can see how I ended up with the code, what errors I got, what questions I asked on SO, and so on. How do people do this in practice? Which software do they use? I know our project management software, JIRA, has tasks, but that does not cover what steps I took to solve those tasks. What is the best way, so that when I look back at my 2-year-old project, I know how I solved a particular task?

  • Laptop stopped recognizing USB hard drive

    - by vahokif
    Hi, My Packard Bell EasyNote TX86 laptop stopped recognizing my 1 TB Toshiba Store Art hard drive. It worked fine until now, and it still works on other computers. Other USB devices (including storage) work, and I've tried plugging it in every port, to no avail. When I plug it in it spins up, but Windows doesn't react at all (it's not in disk management), Linux doesn't write anything in dmesg and I can't see it in BIOS setup. I didn't use it at all today, apart from plugging it into a freshly-installed Windows 7 machine once (where it worked). What can I do? Which device is to blame here? EDIT: One more thing. I unplugged the drive while the laptop was hibernated. Google says this might be the problem and it might have something to do with resetting the USB Host Controller.

  • Restoring permissions on Windows 2008

    - by Andrey
    I was playing with folder permissions because SVN was unable to write to a folder, and now I'm in a state where, if I go to any folder on the C: drive in Windows Explorer and right-click, it takes 30 seconds to show the context menu and then the window just hangs. It definitely has something to do with permissions, as it was all working fine until I started tweaking permissions about an hour ago. My login belongs to two groups, Users and Administrators. I changed ownership of the C drive to the Administrators group and I think that screwed everything up, but I can't change it back because I don't even remember what it was :) Oh, and only the Administrators group has access to drive C now. Is there any way to reset permissions to some previous, or at least workable, state?

  • An Oracle Intern's Story by Samarth Varshney

    - by user769227
    I have written a short write-up about my experience at Oracle and am attaching some pics along:  I joined Oracle on 5th January 2011 as part of my internship program in BITS Pilani Goa Campus. In the short period of six months, I had the most beautiful and interesting time of my life. It was fun to work in Oracle, thanks to the whole team. I had an excellent manager, simple and sophisticated, who gave me the utmost enthusiasm to work. I gained a lot of knowledge during my internship, thanks to my colleagues. They were very helpful and motivated us (interns) in every possible way. In the initial stages of work, in which you know almost nothing, they helped me gain knowledge at a rapid speed. Thanks to the vast database of study material at the Oracle site, that I could start on with my project in a very short time.  For me, the time flew like anything and made the 6 months look like a few days. It was probably due to the team, that the work was so much fun. We had our deadlines but had full freedom as to how to work and when to work. I don't remember a single instance, in which I was working and not listening to songs. I mean it will always be a time to remember. I hope to join this company and make this time last forever.  Samarth 

  • On RouterOS, how will transparent proxying (with DNAT) affect reporting of netflows?

    - by Tim
    I have a box running Mikrotik RouterOS, which is set up to do transparent web proxying, as described here. In short, this means that I have a firewall rule for destination NAT causing any port 80 traffic to get redirected to port 8080 on the router, which is received by the Mikrotik local web proxy. The local web proxy then makes the web request on the client's behalf, in this case to a parent web proxy server (which in turn does the real web request). My question is, how will this two-part process get reported in the logging of traffic flow information (netflows)? Looking at the logged information, what I seem to be seeing is this: One flow recorded from client machine (private IP) to remote proxy (8080) Another flow recorded from router to remote proxy (8080) The original request that the client made to port 80 isn't recorded. I want to write code to analyse traffic usage, so I want to be sure I'm not losing information if I discard the latter of these.
